Showing all 9 results
Peer reviewed
Litwok, Daniel; Peck, Laura R. – American Journal of Evaluation, 2019
In experimental evaluations of policy interventions, the so-called Bloom adjustment is commonly used to estimate the impact of the treatment on the treated. It does so by rescaling the estimated impact of the intention to treat--that is, the overall treatment-control group difference in outcomes for the entire experimental sample--by the…
Descriptors: Computation, Outcomes of Treatment, Program Evaluation, Scaling
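The abstract above describes the Bloom adjustment: the intention-to-treat (ITT) impact, the overall treatment-control difference in mean outcomes, is rescaled by the treatment group's take-up rate to estimate the impact of the treatment on the treated (TOT). A minimal sketch, with illustrative names and data:

```python
def bloom_adjustment(treatment_outcomes, control_outcomes, takeup_rate):
    """Rescale the ITT estimate (treatment-control mean difference) by the
    share of the treatment group that actually received the treatment."""
    itt = (sum(treatment_outcomes) / len(treatment_outcomes)
           - sum(control_outcomes) / len(control_outcomes))
    return itt / takeup_rate

# Example: an ITT of 2.0 with 50% take-up implies a TOT of 4.0.
tot = bloom_adjustment([12.0, 14.0], [11.0, 11.0], takeup_rate=0.5)
```

The rescaling assumes no impact on those who did not take up the treatment; the truncated abstract does not state the article's further conditions.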
Peer reviewed
Peck, Laura R. – American Journal of Evaluation, 2013
Researchers and policy makers are increasingly dissatisfied with the "average treatment effect." Not only are they interested in learning about the overall causal effects of policy interventions, but they want to know what specifically it is about the intervention that is responsible for any observed effects. In the U.S., using…
Descriptors: Policy, Intervention, Policy Analysis, Program Evaluation
Peer reviewed
Bell, Stephen H.; Peck, Laura R. – American Journal of Evaluation, 2013
To answer "what works?" questions about policy interventions based on an experimental design, Peck (2003) proposes to use baseline characteristics to symmetrically divide treatment and control group members into subgroups defined by endogenously determined post-random-assignment events. Symmetric prediction of these subgroups in both…
Descriptors: Program Effectiveness, Experimental Groups, Control Groups, Program Evaluation
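The core of the symmetric-subgroup approach described above is that the same baseline-only prediction rule is applied to treatment and control members alike, so subgroup membership remains exogenous and the experimental contrast survives within each subgroup. A minimal sketch, where the threshold rule and the data are purely illustrative:

```python
def symmetric_subgroups(members, predict):
    """Partition (arm, baseline_x) records into predicted subgroups using
    the same baseline-characteristic rule in both experimental arms."""
    groups = {"predicted_in": [], "predicted_out": []}
    for arm, x in members:
        key = "predicted_in" if predict(x) else "predicted_out"
        groups[key].append((arm, x))
    return groups

# Both arms are split by one rule, preserving treatment-control symmetry.
sample = [("T", 0.8), ("T", 0.2), ("C", 0.9), ("C", 0.1)]
groups = symmetric_subgroups(sample, predict=lambda x: x >= 0.5)
```

In practice the rule would be a model of the post-random-assignment event fit on baseline characteristics; the excerpt does not specify the article's estimator.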
Peer reviewed
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
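As the abstract describes it, CSEPP replaces the control-group or before-measure counterfactual with each participant's own estimate of the outcome they would have had without the program. A minimal sketch of that arithmetic, with illustrative data; any further adjustments in the article are not visible in the excerpt:

```python
def csepp_effects(posttest, self_estimated_counterfactual):
    """Individual effect = observed outcome minus the outcome the
    participant estimates they would have had without the program."""
    return [y - c for y, c in zip(posttest, self_estimated_counterfactual)]

effects = csepp_effects([8.0, 6.0, 7.0], [5.0, 6.0, 5.0])
average_effect = sum(effects) / len(effects)  # average treatment effect
```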
Peer reviewed
Harvill, Eleanor L.; Peck, Laura R.; Bell, Stephen H. – American Journal of Evaluation, 2013
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment--control symmetry, however, prior work has posited that it is necessary to use…
Descriptors: Experimental Groups, Control Groups, Research Design, Sampling
Peer reviewed
Taylor, Paul J.; Russ-Eft, Darlene F.; Taylor, Hazel – American Journal of Evaluation, 2009
We tested for inflationary bias introduced through retrospective pretests by analyzing traditional pretest, retrospective pretest, and posttest evaluation data collected on a first-line supervisory leadership training program, involving 196 supervisors and their subordinates, across 17 organizational settings. Retrospective pretest ratings by both…
Descriptors: Program Evaluation, Pretests Posttests, Leadership Training, Effect Size
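The inflationary bias the study above tests for can be made concrete: with a traditional pretest, a retrospective pretest, and a posttest for the same raters, the program "gain" can be computed both ways, and any gap between the two gains is the candidate bias. A minimal sketch with illustrative ratings:

```python
def gains(traditional_pre, retrospective_pre, post):
    """Per-rater gain scores computed against each pretest type, plus the
    difference (retrospective minus traditional gain) as the bias term."""
    trad = [p - t for t, p in zip(traditional_pre, post)]
    retro = [p - r for r, p in zip(retrospective_pre, post)]
    bias = [rg - tg for tg, rg in zip(trad, retro)]
    return trad, retro, bias

# If raters recall their pre-program skill as lower than they rated it at
# the time, the retrospective gain is inflated by exactly that difference.
trad, retro, bias = gains([3.0, 3.5], [2.5, 3.0], [4.0, 4.5])
```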
Peer reviewed
Peck, Laura R. – American Journal of Evaluation, 2007
This article uses propensity scores to identify subgroups of individuals most likely to experience a reduction in cash benefits because of sanctions in some of the programs that make up the National Evaluation of Welfare-to-Work Strategies. It extends program evaluation methodology by using propensity scoring to identify the subgroups of…
Descriptors: Program Evaluation, Control Groups, Welfare Recipients, Research Design
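The subgroup-identification step described above can be sketched as: estimate each individual's propensity for the event (here, a benefit-reducing sanction) from baseline covariates, then define the subgroup as those above a cutoff. This toy logistic regression and all data are illustrative assumptions, not the article's specification:

```python
import math

def fit_propensity(X, y, lr=0.5, epochs=500):
    """Tiny one-layer logistic regression fit by stochastic gradient
    descent; returns a scoring function over covariate vectors."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return lambda x: 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))

# Score everyone, then take the high-propensity subgroup above a cutoff.
X = [[0.0], [0.2], [0.8], [1.0]]
y = [0, 0, 1, 1]  # 1 = experienced a sanction
score = fit_propensity(X, y)
high_propensity = [x for x in X if score(x) >= 0.5]
```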
Peer reviewed
McCall, Robert B.; Ryan, Carey S.; Green, Beth L. – American Journal of Evaluation, 1999
Outlines some nonrandomized constructed comparison strategies that can be used to evaluate interventions for children and illustrates their use for outcome variables that would be expected to change over age if no treatment were given. The proposed strategy consists of determining an expected age function for the dependent variable using pretest…
Descriptors: Age Differences, Children, Comparative Analysis, Control Groups
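The expected-age-function idea above can be sketched as: fit an outcome-by-age trend from pretest (no-treatment) data, read each child's expected score at follow-up age off that curve, and take the observed posttest minus this age-based expectation as the effect estimate. A linear trend and all numbers here are illustrative assumptions:

```python
def fit_age_function(ages, scores):
    """Least-squares linear fit of outcome on age from pretest data."""
    n = len(ages)
    mean_age, mean_score = sum(ages) / n, sum(scores) / n
    slope = (sum((a - mean_age) * (s - mean_score) for a, s in zip(ages, scores))
             / sum((a - mean_age) ** 2 for a in ages))
    return lambda age: mean_score + slope * (age - mean_age)

expected = fit_age_function([4.0, 5.0, 6.0, 7.0], [10.0, 12.0, 14.0, 16.0])
effect = 20.0 - expected(8.0)  # observed posttest minus age-expected score
```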
Peer reviewed
Bamberger, Michael; Rugh, Jim; Church, Mary; Fort, Lucia – American Journal of Evaluation, 2004
The paper discusses two common scenarios in which evaluators must conduct impact evaluations when working under budget, time, or data constraints. Under the first scenario the evaluator is not called in until the project is already well advanced, and there is a tight deadline for completing the evaluation, frequently combined with a limited budget…
Descriptors: Foreign Countries, Program Effectiveness, Evaluators, Control Groups