Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials (multiple sites or multiple arms within a single evaluation, as well as replications) in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
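The abstract above notes that pooling multiple sites or arms raises sample size and hence statistical power. As a rough illustration of that general principle (not taken from the paper itself), a normal-approximation power formula for a two-sample comparison shows how power grows when per-arm sample size doubles; the effect size and alpha below are arbitrary illustrative values:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(effect_size: float, n_per_arm: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample z-test with equal arms.

    Uses the standard normal approximation: the noncentrality parameter is
    effect_size * sqrt(n_per_arm / 2), and the (negligible) lower-tail
    rejection probability is ignored.
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)          # critical value, e.g. 1.96
    noncentrality = effect_size * sqrt(n_per_arm / 2)     # shift of the test statistic
    return NormalDist().cdf(noncentrality - z_crit)       # P(reject | true effect)

# Doubling the per-arm sample size (as pooling two equal sites would)
# raises power substantially for the same hypothetical effect size:
print(round(two_sample_power(0.3, 100), 3))   # one site
print(round(two_sample_power(0.3, 200), 3))   # two pooled sites
```

This is only a back-of-the-envelope sketch of the sample-size rationale the abstract mentions; real multi-site evaluations would also need to account for site-level variation.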
Wharton, Tracy; Alexander, Neil – American Journal of Evaluation, 2013
This article describes lessons learned about implementing evaluations in hospital settings. In order to overcome the methodological dilemmas inherent in this environment, we used a practical participatory evaluation (P-PE) strategy to engage as many stakeholders as possible in the process of evaluating a clinical demonstration project.…
Descriptors: Hospitals, Demonstration Programs, Program Evaluation, Evaluation Methods
Walker, Robert; Hoggart, Lesley; Hamilton, Gayle – American Journal of Evaluation, 2008
Although random assignment is generally the preferred methodology in impact evaluations, it raises numerous ethical concerns, some of which are addressed by securing participants' informed consent. However, there has been little investigation of how consent is obtained in social experiments and the amount of information that can be conveyed--and…
Descriptors: Employment Programs, Foreign Countries, Case Studies, Program Evaluation
Gordon, Rachel A.; Heinrich, Carolyn J. – American Journal of Evaluation, 2004
Government and public focus on accountability for program outcomes, combined with practical and ethical constraints on experimental designs, make nonexperimental studies of social programs an increasingly common approach to producing information on program performance. In this paper, we compare the effectiveness of alternative nonexperimental…
Descriptors: Program Effectiveness, Income, Evaluation Methods, Demonstration Programs
Barth, Michael C. – American Journal of Evaluation, 2004
Demonstration programs and social experiments are often subject to sophisticated, controlled evaluations. An important factor that is not subject to control, and sometimes even goes unobserved, is overall program site quality. Site quality can be observed in process evaluations, but these tend to be expensive. This paper describes an alternative…
Descriptors: Program Implementation, Demonstration Programs, Site Selection, Site Analysis