Showing 1 to 15 of 33 results
Peer reviewed
Moreno, Lorenzo; Trevino, Ernesto; Yoshikawa, Hirokazu; Mendive, Susana; Reyes, Joaquin; Godoy, Felipe; Del Rio, Francisca; Snow, Catherine; Leyva, Diana; Barata, Clara; Arbour, MaryCatherine; Rolla, Andrea – Evaluation Review, 2011
Evaluation designs for social programs are developed assuming minimal or no disruption from external shocks, such as natural disasters, because accounting for extremely rare shocks in the design may not be worthwhile. Among such extreme shocks is the 2010 Chile earthquake. Un Buen Comienzo (UBC), an ongoing early childhood program in…
Descriptors: Research Design, Natural Disasters, Foreign Countries, Early Childhood Education
Peer reviewed
Emery, Sherry; Lee, Jungwha; Curry, Susan J.; Johnson, Tim; Sporer, Amy K.; Mermelstein, Robin; Flay, Brian; Warnecke, Richard – Evaluation Review, 2010
Background: Surveys of community-based programs are difficult to conduct when there is virtually no information about the number or locations of the programs of interest. This article describes the methodology used by the Helping Young Smokers Quit (HYSQ) initiative to identify and profile community-based youth smoking cessation programs in the…
Descriptors: Smoking, Research Methodology, Community Programs, Community Surveys
Peer reviewed
Hedrick, Terry E.; Shipman, Stephanie L. – Evaluation Review, 1988
Changes made in 1981 to the Aid to Families with Dependent Children (AFDC) program under the Omnibus Budget Reconciliation Act were evaluated. Multiple quasi-experimental designs (interrupted time series, non-equivalent comparison groups, and simple pre-post designs) used to address evaluation questions illustrate the issues faced by evaluators in…
Descriptors: Evaluation Methods, Program Evaluation, Quasiexperimental Design, Research Design
Peer reviewed
Schochet, Peter; Burghardt, John – Evaluation Review, 2007
This article discusses the use of propensity scoring in experimental program evaluations to estimate impacts for subgroups defined by program features and participants' program experiences. The authors discuss estimation issues and provide specification tests. They also discuss the use of an overlooked data collection design--obtaining predictions…
Descriptors: Program Effectiveness, Scoring, Experimental Programs, Control Groups
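As a rough illustration of the propensity-scoring idea summarized in the Schochet and Burghardt abstract (not their actual estimator), the sketch below fits a logistic model of treatment status on covariates by plain gradient descent and returns predicted propensity scores; the function names and toy data are hypothetical.

```python
import math

def fit_propensity(X, treated, lr=0.1, steps=2000):
    """Fit a logistic model P(treated | X) by batch gradient descent.
    Returns coefficients, bias first. Illustrative only."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, t in zip(X, treated):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))
            err = p - t                  # gradient of the log-loss
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def propensity(w, xi):
    """Predicted probability of treatment for one unit."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 / (1 + math.exp(-z))

# Toy data: treatment becomes more likely as the single covariate rises.
X = [[0.0], [1.0], [2.0], [3.0]]
treated = [0, 0, 1, 1]
w = fit_propensity(X, treated)
scores = [propensity(w, xi) for xi in X]
```

In a subgroup analysis of the kind the abstract describes, such scores would typically be used to stratify or weight units before comparing outcomes within subgroups.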
Peer reviewed
Heath, Linda; And Others – Evaluation Review, 1982
A problem for program evaluators involves a search for ways to maximize internal validity and inferential power of research designs while being able to assess long-term effects of social programs. A multimethodological research strategy combining a delayed control group true experiment with a multiple time series and switching replications design…
Descriptors: Control Groups, Evaluation Methods, Intervention, Program Evaluation
Peer reviewed
Chelimsky, Eleanor – Evaluation Review, 1985
Four aspects of the relationship between auditing and evaluation in their approaches to program assessment are examined: (1) their different origins; (2) the definitions and purposes of both, and the questions they seek to answer; (3) contrasting viewpoints and emphases of auditors and evaluators; and (4) commonalities of interest and potential…
Descriptors: Accountability, Accounting, Data Analysis, Evaluation Methods
Peer reviewed
St.Pierre, Robert G. – Evaluation Review, 1980
Factors that influence the sample size necessary for longitudinal evaluations include the nature of the evaluation questions, nature of available comparison groups, consistency of the treatment in different sites, effect size, attrition rate, significance level for statistical tests, and statistical power. (Author/GDC)
Descriptors: Evaluation Methods, Field Studies, Influences, Longitudinal Studies
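Several of the determinants St.Pierre lists (effect size, attrition rate, significance level, statistical power) feed directly into a standard back-of-the-envelope sample-size calculation. The sketch below uses the usual normal approximation for a two-arm comparison of means; it is a generic illustration, not St.Pierre's procedure, and the function name and defaults are assumptions.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80, attrition=0.0):
    """Approximate sample size per group for a two-arm comparison of means,
    via the normal approximation, inflated for expected attrition."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)          # two-sided significance level
    z_beta = z(power)                   # desired statistical power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n / (1 - attrition))

# A standardized effect size of 0.5 at alpha = .05 and 80% power:
n_per_group(0.5)                 # → 63 per group
n_per_group(0.5, attrition=0.2)  # → 79 per group when 20% attrition is expected
```

The attrition inflation shows why longitudinal evaluations, where dropout accumulates across waves, need larger initial samples than one-shot studies.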
Peer reviewed
Chen, Huey-Tsyh; Rossi, Peter H. – Evaluation Review, 1983
The use of theoretical models in impact assessment can heighten the power of experimental designs and compensate for some deficiencies of quasi-experimental designs. Theoretical models of implementation processes are examined, arguing that these processes are a major obstacle to fully effective programs. (Author/CM)
Descriptors: Evaluation Criteria, Evaluation Methods, Models, Program Evaluation
Peer reviewed
Heilman, John G. – Evaluation Review, 1980
The choice between experimental research and process-oriented research as the only valid paradigm of evaluation research is rejected; it is argued that there is a middle ground. Suggestions are made for mixing the two approaches to suit particular research settings. (Author/GK)
Descriptors: Evaluation Methods, Evaluative Thinking, Models, Program Evaluation
Peer reviewed
Stillman, Frances; Hartman, Anne; Graubard, Barry; Gilpin, Elizabeth; Chavis, David; Garcia, John; Wun, Lap-Ming; Lynn, William; Manley, Marc – Evaluation Review, 1999
Describes the conceptual design, research framework, evaluation components, and analytic strategies that are guiding the evaluation of a demonstration-research effort, the American Stop Smoking Intervention Study (ASSIST). The ASSIST evaluation is a unique analysis of the relationships among social context, public-health activity, tobacco use, and…
Descriptors: Behavior Patterns, Context Effect, Evaluation Methods, Intervention
Peer reviewed
Bloom, Howard S. – Evaluation Review, 1987
This article presents lessons learned from an innovative employment and training program for dislocated workers. It provides specific information about program design and serves as a prototype for how social experimentation can be used by state and local governments. (Author/LMO)
Descriptors: Adults, Dislocated Workers, Employment Practices, Program Evaluation
Peer reviewed
Snow, Robert E.; And Others – Evaluation Review, 1986
Employing a split-half design, this research examines the effects of prior letters, in conjunction with a follow-up telephone survey, on three factors affecting evaluation results: contact rates, response rates, and respondent cooperation. Prior letters did not increase contact, improve cooperation, or decrease refusals. (Author/LMO)
Descriptors: Adults, Interviews, Job Training, Letters (Correspondence)
Peer reviewed
Horn, Wade F. – Evaluation Review, 1982
In an overview of single-case methodology, the potential utility of A-B-A and multiple baseline designs for evaluating social programs is discussed. Validity factors and cost-effectiveness are considered, showing that these designs are viable alternative methods where traditional randomized group designs are infeasible. (Author/CM)
Descriptors: Case Studies, Cost Effectiveness, Multivariate Analysis, Program Evaluation
Peer reviewed
Wortman, Paul M.; Marans, Robert W. – Evaluation Review, 1987
The concept of "preevaluative research" is examined in the context of a museum exhibition evaluation. It is viewed as distinct from an evaluability assessment. The exhibit preevaluative study indicates that instrumentation and implementation issues are likely to benefit from such activities, but that design and analysis may suffer…
Descriptors: Arts Centers, High Schools, Interviews, Program Evaluation
Peer reviewed
Severy, Lawrence J.; Whitaker, J. Michael – Evaluation Review, 1982
The desirability of combining tests of theory with evaluations of treatment modalities is argued in an investigation of the effectiveness of a juvenile diversion program. Using a true experimental design (with randomization), recidivism analyses dependent on court record data failed to demonstrate the relative superiority of any of three treatment…
Descriptors: Delinquent Rehabilitation, Experimental Groups, Hypothesis Testing, Measurement Techniques