Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 2
Since 2016 (last 10 years): 6
Since 2006 (last 20 years): 27
Descriptor
Program Evaluation: 31
Research Design: 31
Evaluation Methods: 20
Research Methodology: 14
Program Effectiveness: 9
Intervention: 8
Evaluation Research: 6
Evaluators: 6
Data Analysis: 5
Control Groups: 4
Evidence: 4
Source
American Journal of Evaluation: 31
Publication Type
Journal Articles: 31
Reports - Research: 12
Reports - Descriptive: 10
Reports - Evaluative: 7
Information Analyses: 1
Opinion Papers: 1
Tests/Questionnaires: 1
Education Level
Adult Education: 3
Elementary Secondary Education: 3
Higher Education: 2
Postsecondary Education: 2
Location
Germany: 2
Maryland: 2
Arizona: 1
California: 1
France: 1
Laws, Policies, & Programs
No Child Left Behind Act 2001: 1
Bell, Stephen H.; Stapleton, David C.; Wood, Michelle; Gubits, Daniel – American Journal of Evaluation, 2023
A randomized experiment that measures the impact of a social policy in a sample of the population reveals whether the policy would work, on average, under universal application. An experiment that includes only the subset of the population that volunteers for the intervention generates narrower "proof-of-concept" evidence of whether the…
Descriptors: Public Policy, Policy Formation, Federal Programs, Social Services
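The gap the abstract points to can be made concrete with a standard decomposition; the notation, with \(\pi\) as the volunteer share, is mine rather than the authors':

\[
\mathrm{ATE}_{\mathrm{pop}} = \pi\,\mathrm{ATE}_{\mathrm{vol}} + (1-\pi)\,\mathrm{ATE}_{\mathrm{nonvol}}
\]

A volunteers-only experiment identifies only \(\mathrm{ATE}_{\mathrm{vol}}\), so extrapolating to universal application requires assumptions about the effect among those who would not volunteer.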
Bower, Kyle L. – American Journal of Evaluation, 2022
The purpose of this paper is to introduce the Five-Level Qualitative Data Analysis (5LQDA) method for ATLAS.ti as a way to intentionally design methodological approaches applicable to the field of evaluation. To demonstrate my analytical process using ATLAS.ti, I use examples from an existing evaluation of a STEM Peer Learning Assistant program.…
Descriptors: Qualitative Research, Data Analysis, Program Evaluation, Evaluation Methods
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between the RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
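As a rough illustration of why RDD evidence is local, here is a minimal sharp-RD sketch in Python with simulated data; the bandwidth, functional form, and true effect (0.4) are invented for the example and are not from the article:

```python
# Minimal sharp regression discontinuity sketch with simulated data.
import numpy as np

rng = np.random.default_rng(0)
n, cutoff, bandwidth = 2000, 0.0, 0.5

running = rng.uniform(-1, 1, n)              # assignment (running) variable
treated = (running >= cutoff).astype(float)  # sharp rule: treated at/above cutoff
outcome = 1.0 + 0.8 * running + 0.4 * treated + rng.normal(0, 0.3, n)

# Local linear regression: keep observations near the cutoff and allow
# separate slopes on each side; the coefficient on `treated` is the jump
# at the cutoff, identified only for units right at the threshold,
# which is the narrow subpopulation the abstract refers to.
mask = np.abs(running - cutoff) <= bandwidth
x, d, y = running[mask] - cutoff, treated[mask], outcome[mask]
X = np.column_stack([np.ones(x.size), d, x, d * x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated effect at the cutoff: {coef[1]:.3f}")  # close to 0.4
```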
Zandniapour, Lily; Deterding, Nicole M. – American Journal of Evaluation, 2018
Tiered evidence initiatives are an important federal strategy to incentivize and accelerate the use of rigorous evidence in planning, implementing, and assessing social service investments. The Social Innovation Fund (SIF), a program of the Corporation for National and Community Service, adopted a public-private partnership approach to tiered…
Descriptors: Program Effectiveness, Program Evaluation, Research Needs, Evidence
Louie, Josephine; Rhoads, Christopher; Mark, June – American Journal of Evaluation, 2016
Interest in the regression discontinuity (RD) design as an alternative to randomized controlled trials (RCTs) has grown in recent years. There is little practical guidance, however, on conditions that would lead to a successful RD evaluation or the utility of studies with underpowered RD designs. This article describes the use of RD design to…
Descriptors: Regression (Statistics), Program Evaluation, Algebra, Supplementary Education
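One way to gauge whether an RD study is adequately powered, in the spirit of the practical guidance the abstract calls for, is simulation: draw many fake datasets under an assumed effect size, estimate the discontinuity each time, and count rejections. Every parameter below is an assumption for the sketch:

```python
# Simulation-based power check for a sharp RD analysis (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def rejects_null(n: int, tau: float, sd: float) -> bool:
    x = rng.uniform(-1, 1, n)
    d = (x >= 0).astype(float)
    y = 1.0 + 0.8 * x + tau * d + rng.normal(0, sd, n)
    X = np.column_stack([np.ones(n), d, x, d * x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return abs(coef[1] / se) > 1.96   # two-sided test at roughly the 5% level

power = np.mean([rejects_null(n=400, tau=0.2, sd=1.0) for _ in range(500)])
print(f"simulated power: {power:.2f}")  # RD designs typically need far larger
                                        # samples than an RCT for the same power
```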
Solmeyer, Anna R.; Constance, Nicole – American Journal of Evaluation, 2015
Traditionally, evaluation has primarily tried to answer the question "Does a program, service, or policy work?" Recently, more attention has been given to questions about variation in program effects and the mechanisms through which program effects occur. Addressing these kinds of questions requires moving beyond assessing average program…
Descriptors: Program Effectiveness, Program Evaluation, Program Content, Measurement Techniques
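In practice, moving beyond the average effect often amounts to letting the effect vary with a baseline characteristic. A toy sketch with an interaction term; the data, moderator, and coefficients are invented:

```python
# Interacting treatment with a baseline moderator so the effect can vary.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
d = rng.integers(0, 2, n).astype(float)   # randomized treatment indicator
m = rng.normal(0, 1, n)                   # baseline moderator
y = 0.5 + 0.3 * d + 0.2 * m + 0.25 * d * m + rng.normal(0, 1, n)

# y = a + tau*d + b*m + g*(d*m): tau is the effect at m = 0,
# g is how the effect changes per unit of the moderator.
X = np.column_stack([np.ones(n), d, m, d * m])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"effect at m=0: {coef[1]:.2f}, change per unit of m: {coef[3]:.2f}")
```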
Ahlin, Eileen M. – American Journal of Evaluation, 2015
Evaluation research conducted in agencies that sanction law violators is often challenging, and due process may preclude evaluators from using experimental methods in traditional criminal justice agencies such as police, courts, and corrections. However, administrative agencies often deal with the same population but are not bound by due process…
Descriptors: Research Methodology, Evaluation Research, Criminals, Correctional Institutions
Le Menestrel, Suzanne M.; Walahoski, Jill S.; Mielke, Monica B. – American Journal of Evaluation, 2014
The 4-H youth development organization is a complex public-private partnership between the U.S. Department of Agriculture's National Institute of Food and Agriculture, the nation's Cooperative Extension system, and National 4-H Council, a private, nonprofit partner. The current article focuses on a partnership approach to the…
Descriptors: Youth Programs, Evaluators, Cooperation, Evaluation Methods
Patton, Michael Quinn – American Journal of Evaluation, 2015
Our understanding of programs is enhanced when trained, skilled, and observant evaluators go "into the field"--the real world where programs are conducted--paying attention to what's going on, systematically documenting what they see, and reporting what they learn. The article opens by presenting and illustrating twelve reasons for…
Descriptors: Program Evaluation, Evaluation Methods, Design Requirements, Field Studies
Bell, Stephen H.; Peck, Laura R. – American Journal of Evaluation, 2013
To answer "what works?" questions about policy interventions based on an experimental design, Peck (2003) proposes to use baseline characteristics to symmetrically divide treatment and control group members into subgroups defined by endogenously determined post-random-assignment events. Symmetric prediction of these subgroups in both…
Descriptors: Program Effectiveness, Experimental Groups, Control Groups, Program Evaluation
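A hedged sketch of the symmetric-subgroup idea as the abstract describes it: define subgroups with a rule based only on baseline (pre-randomization) characteristics, so the rule splits treatment and control groups identically and the within-subgroup contrast stays experimental. The threshold rule and simulated data are illustrative, not Peck's actual specification:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
d = rng.integers(0, 2, n)        # random assignment
x = rng.normal(0, 1, n)          # baseline characteristic
# Endogenous post-assignment event (e.g., program completion); in a real
# study this is observed only in the treatment group, which is why a
# baseline-only prediction rule is needed.
completes = x + rng.normal(0, 1, n) > 0
y = 1.0 + 0.5 * d * completes + rng.normal(0, 1, n)

predicted = x > 0                # symmetric baseline-only rule
effect = y[(d == 1) & predicted].mean() - y[(d == 0) & predicted].mean()
print(f"effect among predicted completers: {effect:.2f}")
```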
DeBarger, Angela Haydel; Penuel, William R.; Harris, Christopher J.; Kennedy, Cathleen A. – American Journal of Evaluation, 2016
Evaluators must employ research designs that generate compelling evidence related to the worth or value of programs, in which assessment data often play a critical role. This article focuses on assessment design in the context of evaluation. It describes the process of using the Framework for K-12 Science Education and Next Generation Science…
Descriptors: Intervention, Program Evaluation, Research Design, Science Tests
Mueller, Christoph Emanuel; Gaus, Hansjoerg – American Journal of Evaluation, 2015
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
Descriptors: Intervention, Research Design, Research Methodology, Program Evaluation
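On the abstract's description, the CSEPP estimate is simple arithmetic: each participant's observed outcome minus their own estimate of what it would have been without the program. A toy example with invented numbers:

```python
# Toy CSEPP arithmetic: individual effect = observed post-program outcome
# minus the participant's self-estimated counterfactual. Numbers invented.
posttest      = [72, 65, 80, 58]   # observed outcomes after the program
self_estimate = [60, 63, 70, 55]   # each participant's counterfactual guess

effects = [obs - cf for obs, cf in zip(posttest, self_estimate)]
print(effects, sum(effects) / len(effects))   # [12, 2, 10, 3] 6.75
```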
Does My Program Really Make a Difference? Program Evaluation Utilizing Aggregate Single-Subject Data
Burns, Catherine E. – American Journal of Evaluation, 2015
In the current climate of increasing fiscal and clinical accountability, information is required about overall program effectiveness using clinical data. These requests present a challenge for programs utilizing single-subject data due to the use of highly individualized behavior plans and behavioral monitoring. Consequently, the diversity of the…
Descriptors: Program Evaluation, Program Effectiveness, Data Analysis, Research Design
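One plausible way to aggregate single-subject data of this kind is to score each case with a per-case effect size and then summarize across cases; the sketch below uses a simple nonoverlap statistic (in the spirit of nonoverlap-of-all-pairs) with invented phase data, an assumption rather than the article's method:

```python
# Score each case by the share of baseline/intervention pairs where the
# intervention observation is better (ties counted half), then average.
from itertools import product

cases = {
    "case_a": ([4, 5, 3, 4], [7, 8, 6, 9]),  # (baseline, intervention)
    "case_b": ([2, 3, 2], [3, 5, 4]),
    "case_c": ([6, 5, 6, 7], [7, 7, 8]),
}

def nonoverlap(baseline, intervention):
    pairs = list(product(baseline, intervention))
    wins = sum(t > b for b, t in pairs) + 0.5 * sum(t == b for b, t in pairs)
    return wins / len(pairs)

per_case = {name: nonoverlap(b, t) for name, (b, t) in cases.items()}
overall = sum(per_case.values()) / len(per_case)
print(per_case, f"program-level summary: {overall:.2f}")
```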
Braverman, Marc T. – American Journal of Evaluation, 2013
Sound evaluation planning requires numerous decisions about how constructs in a program theory will be translated into measures and instruments that produce evaluation data. This article, the first in a dialogue exchange, examines how decisions about measurement are (and should be) made, especially in the context of small-scale local program…
Descriptors: Evaluation Methods, Methods Research, Research Methodology, Research Design
Harvill, Eleanor L.; Peck, Laura R.; Bell, Stephen H. – American Journal of Evaluation, 2013
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment-control symmetry, however, prior work has posited that it is necessary to use…
Descriptors: Experimental Groups, Control Groups, Research Design, Sampling