Rebecca Walcott; Isabelle Cohen; Denise Ferris – Evaluation Review, 2024
When and how to survey potential respondents is often determined by budgetary and external constraints, but the choice of survey modality may have enormous implications for data quality. Different survey modalities may be differentially susceptible to measurement error attributable to interviewer assignment, known as interviewer effects. In this…
Descriptors: Surveys, Research Methodology, Error of Measurement, Interviews
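As a rough illustration of the interviewer-effects idea above (not the authors' method), such effects are often quantified as an intraclass correlation: the share of response variance attributable to which interviewer conducted the survey. A minimal sketch with hypothetical data:

```python
# Sketch: quantifying interviewer effects as a one-way ANOVA
# intraclass correlation (rho). Illustrative data, not from the study.

def interviewer_icc(groups):
    """groups: list of lists, responses grouped by interviewer."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between- and within-interviewer sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    # Average group size adjusted for unbalanced groups
    n0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)
    return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)

# Hypothetical responses clustered by three interviewers
rho = interviewer_icc([[4, 5, 4, 5], [2, 3, 2, 3], [3, 4, 3, 4]])
```

A rho well above zero signals that responses cluster by interviewer, which inflates variance and biases comparisons across modalities with different interviewer workloads.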
Moreno, Lorenzo; Trevino, Ernesto; Yoshikawa, Hirokazu; Mendive, Susana; Reyes, Joaquin; Godoy, Felipe; Del Rio, Francisca; Snow, Catherine; Leyva, Diana; Barata, Clara; Arbour, MaryCatherine; Rolla, Andrea – Evaluation Review, 2011
Evaluation designs for social programs are developed assuming minimal or no disruption from external shocks, such as natural disasters. This is because extremely rare shocks may not make it worthwhile to account for them in the design. Among extreme shocks is the 2010 Chile earthquake. Un Buen Comienzo (UBC), an ongoing early childhood program in…
Descriptors: Research Design, Natural Disasters, Foreign Countries, Early Childhood Education
Emery, Sherry; Lee, Jungwha; Curry, Susan J.; Johnson, Tim; Sporer, Amy K.; Mermelstein, Robin; Flay, Brian; Warnecke, Richard – Evaluation Review, 2010
Background: Surveys of community-based programs are difficult to conduct when there is virtually no information about the number or locations of the programs of interest. This article describes the methodology used by the Helping Young Smokers Quit (HYSQ) initiative to identify and profile community-based youth smoking cessation programs in the…
Descriptors: Smoking, Research Methodology, Community Programs, Community Surveys
Foster, E. Michael; Wiley-Exley, Elizabeth; Bickman, Leonard – Evaluation Review, 2009
Findings from an evaluation of a model system for delivering mental health services to youth were reassessed to determine the robustness of key findings to the use of methodologies unavailable to the original analysts. These analyses address a key concern about earlier findings--that the quasi-experimental design involved the comparison of two…
Descriptors: Mental Health Programs, Health Services, Youth, Delivery Systems
Schochet, Peter; Burghardt, John – Evaluation Review, 2007
This article discusses the use of propensity scoring in experimental program evaluations to estimate impacts for subgroups defined by program features and participants' program experiences. The authors discuss estimation issues and provide specification tests. They also discuss the use of an overlooked data collection design--obtaining predictions…
Descriptors: Program Effectiveness, Scoring, Experimental Programs, Control Groups
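To make the propensity-scoring idea concrete (a generic sketch, not Schochet and Burghardt's estimator), a propensity score is simply the predicted probability of exposure to a program feature given observed covariates, here fit with a tiny stochastic-gradient logistic regression on toy data:

```python
# Sketch: propensity scores from a minimal logistic regression fit by
# stochastic gradient ascent. Toy data; function names are illustrative.
import math
import random

def fit_logistic(X, t, lr=0.1, iters=300):
    w = [0.0] * (len(X[0]) + 1)              # intercept + coefficients
    for _ in range(iters):
        for xi, ti in zip(X, t):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = ti - p                        # gradient of log-likelihood
            w[0] += lr * g
            for j, xj in enumerate(xi):
                w[j + 1] += lr * g * xj
    return w

def propensity(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
X = [[random.gauss(0, 1)] for _ in range(200)]            # one covariate
t = [1 if x[0] + random.gauss(0, 1) > 0 else 0 for x in X]  # exposure
w = fit_logistic(X, t)
scores = [propensity(w, xi) for xi in X]
```

Subgroups with similar scores can then be compared across treatment and control arms, which is the sense in which the article uses scoring within an experimental design.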

Hedrick, Terry E.; Shipman, Stephanie L. – Evaluation Review, 1988
Changes made in 1981 to the Aid to Families with Dependent Children (AFDC) program under the Omnibus Budget Reconciliation Act were evaluated. Multiple quasi-experimental designs (interrupted time series, non-equivalent comparison groups, and simple pre-post designs) used to address evaluation questions illustrate the issues faced by evaluators in…
Descriptors: Evaluation Methods, Program Evaluation, Quasiexperimental Design, Research Design
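The interrupted time-series design mentioned above is commonly analyzed as a segmented regression: level and trend before the policy change, plus a level shift and trend change after it. A minimal sketch on synthetic data (not the AFDC series), with ordinary least squares solved by Gaussian elimination:

```python
# Sketch: interrupted time series as segmented regression, estimated by
# OLS via the normal equations. Synthetic series, illustrative only.

def ols(X, y):
    k = len(X[0])
    # Normal equations: (X'X) beta = X'y
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):            # back substitution
        beta[r] = (b[r] - sum(A[r][j] * beta[j]
                              for j in range(r + 1, k))) / A[r][r]
    return beta

t0 = 10                                       # policy change at t = 10
y = [2.0 + 0.5 * t + (3.0 if t >= t0 else 0.0) for t in range(20)]
X = [[1.0, t, 1.0 if t >= t0 else 0.0, float(t - t0) if t >= t0 else 0.0]
     for t in range(20)]
beta = ols(X, y)     # beta[2] is the level shift at the interruption
```

Here the coefficient on the post-change indicator recovers the jump in the outcome at the interruption, holding the pre-existing trend constant.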

Dennis, Michael L.; Boruch, Robert F. – Evaluation Review, 1989
The question of when randomized experiments are appropriate and feasible for planning and evaluating social programs in developing countries is discussed. Five threshold conditions are defined. Reviews of experiments from Barbados, China, Colombia, Kenya, Israel, Nicaragua, Pakistan, Taiwan, and the United States illustrate the issues…
Descriptors: Cross Cultural Studies, Developing Nations, Developmental Programs, Evaluation Methods

Dwyer, James H. – Evaluation Review, 1984
A solution to the problem of specification error due to excluded variables in statistical models of treatment effects in nonrandomized (nonequivalent) control group designs is presented. It involves longitudinal observation with at least two pretests. A maximum likelihood estimation program such as LISREL may provide reasonable estimates of…
Descriptors: Control Groups, Mathematical Models, Maximum Likelihood Statistics, Monte Carlo Methods
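The logic behind the two-pretest requirement (a simplified sketch, not Dwyer's LISREL model) is that two pretests let each group's pre-existing trend be projected forward, so the treatment effect can be read as the treated group's deviation from its own projection relative to the control group's:

```python
# Sketch of the two-pretest logic with toy group means. A real analysis
# would use maximum likelihood on individual-level data (e.g. LISREL).

def projected_deviation(pre1, pre2, post):
    """Extrapolate the linear pretest trend one period, compare to post."""
    return post - (pre2 + (pre2 - pre1))

# Hypothetical group means at pretest 1, pretest 2, and posttest
treat_dev = projected_deviation(10.0, 12.0, 18.0)  # projected 14, observed 18
ctrl_dev = projected_deviation(11.0, 12.5, 14.0)   # projected 14, observed 14
effect = treat_dev - ctrl_dev
```

With only one pretest, group-specific trends (a form of excluded variable) are indistinguishable from treatment effects; the second pretest identifies them.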

Heath, Linda; And Others – Evaluation Review, 1982
A problem for program evaluators involves a search for ways to maximize internal validity and inferential power of research designs while being able to assess long-term effects of social programs. A multimethodological research strategy combining a delayed control group true experiment with a multiple time series and switching replications design…
Descriptors: Control Groups, Evaluation Methods, Intervention, Program Evaluation

Chelimsky, Eleanor – Evaluation Review, 1985
Four aspects of the relationship between auditing and evaluation in their approaches to program assessment are examined: (1) their different origins; (2) the definitions and purposes of both, and the questions they seek to answer; (3) contrasting viewpoints and emphases of auditors and evaluators; and (4) commonalities of interest and potential…
Descriptors: Accountability, Accounting, Data Analysis, Evaluation Methods

Nagel, Stuart S. – Evaluation Review, 1983
New, simple methods for applying benefit-cost analysis in situations where the variables are nonmonetary are discussed. The approach involves handling nonmonetary variables by converting the problems into questions regarding whether a nonmonetary return is worth more or less than a given dollar cost. The article codifies what good decision makers…
Descriptors: Cost Effectiveness, Decision Making, Evaluation Methods, Program Design
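The conversion Nagel describes can be illustrated as a breakeven calculation (hypothetical figures): instead of monetizing a nonmonetary outcome, compute the dollar value per unit at which the program breaks even and ask whether the outcome is worth at least that much.

```python
# Sketch of the threshold approach to nonmonetary benefit-cost analysis.
# Figures are hypothetical, for illustration only.

def breakeven_value_per_unit(program_cost, units_of_outcome):
    """Dollar value per outcome unit at which benefits equal costs."""
    return program_cost / units_of_outcome

# A $50,000 program that prevents 20 dropouts breaks even if one
# prevented dropout is judged worth at least this much:
threshold = breakeven_value_per_unit(50_000, 20)
```

The decision then reduces to a more-or-less judgment ("is a prevented dropout worth $2,500?") rather than a full monetization exercise.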

St.Pierre, Robert G. – Evaluation Review, 1980
Factors that influence the sample size necessary for longitudinal evaluations include the nature of the evaluation questions, nature of available comparison groups, consistency of the treatment in different sites, effect size, attrition rate, significance level for statistical tests, and statistical power. (Author/GDC)
Descriptors: Evaluation Methods, Field Studies, Influences, Longitudinal Studies
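Several of the factors listed above enter the standard normal-approximation sample-size formula directly; attrition then inflates the result. A sketch (generic formula, not the article's own calculation):

```python
# Sketch: per-group sample size for a two-arm comparison, with an
# attrition adjustment. Standard normal-approximation formula.
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80, attrition=0.0):
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return ceil(n / (1 - attrition))   # inflate for expected dropout

# Medium effect (d = 0.5), 20% expected attrition over the study
n = n_per_group(0.5, attrition=0.2)
```

Smaller effect sizes, higher power targets, and heavier attrition all push the required sample sharply upward, which is why longitudinal evaluations often need far larger initial samples than cross-sectional ones.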

Chen, Huey-Tsyh; Rossi, Peter H. – Evaluation Review, 1983
The use of theoretical models in impact assessment can heighten the power of experimental designs and compensate for some deficiencies of quasi-experimental designs. Theoretical models of implementation processes are examined, arguing that these processes are a major obstacle to fully effective programs. (Author/CM)
Descriptors: Evaluation Criteria, Evaluation Methods, Models, Program Evaluation

Heilman, John G. – Evaluation Review, 1980
The choice between experimental research and process-oriented research as the only valid paradigm of evaluation research is rejected. It is argued that there is a middle ground, and suggestions are made for mixing the two approaches to suit particular research settings. (Author/GK)
Descriptors: Evaluation Methods, Evaluative Thinking, Models, Program Evaluation

Weaver, Frances M.; And Others – Evaluation Review, 1993
Effectiveness of a new 72-hour delivery system (USXPRESS) for pharmaceuticals purchased by the Department of Veterans Affairs (VA) from VA depots was evaluated by comparing 33 test sites with 11 matched sites in a pretest-posttest quasi-experimental design. The USXPRESS system reduced inventory, decreased space needs, and satisfied service…
Descriptors: Comparative Analysis, Cost Effectiveness, Delivery Systems, Pharmacy
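The comparison of test sites against matched sites before and after rollout can be illustrated as a difference-in-differences of site means (illustrative numbers, not the VA inventory data):

```python
# Sketch: pretest-posttest comparison-site logic as a simple
# difference-in-differences of group means. Hypothetical data.

def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(test_pre, test_post, ctrl_pre, ctrl_post):
    return ((mean(test_post) - mean(test_pre))
            - (mean(ctrl_post) - mean(ctrl_pre)))

# Hypothetical inventory levels (lower is better) at test and matched sites
change = diff_in_diff([100, 110, 90], [70, 80, 60],
                      [100, 105, 95], [95, 100, 90])
```

Subtracting the matched sites' change nets out secular trends that affected all sites, leaving the change attributable to the new delivery system under the design's assumptions.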