Showing 1 to 15 of 523 results
Peer reviewed
Jessica Sperling; Megan Gray; Victoria Lee; Lorrie Schmid; Nicholas Malinowski – American Journal of Evaluation, 2025
There has been increased focus within the evaluation field on the value of community-engaged research (CER) methods; however, CER is not easily seen as compatible with experimental evaluation methods (i.e., randomized controlled trials, RCTs). For instance, in an experimental design, researchers leverage randomization to create a counterfactual; this…
Descriptors: Community Involvement, Research Methodology, Randomized Controlled Trials, After School Programs
Huey T. Chen; Liliana Morosanu; Victor H. Chen – American Journal of Evaluation, 2024
Most program evaluation efforts concentrate on assessments of "program implementation" and "program outcomes." However, one aspect of programs that has not received sufficient attention in the literature is evaluation of the program plan itself. Since the quality of the plan and planning process can influence program…
Descriptors: Program Evaluation, Evaluation Methods, Evaluation Research, Planning
Debbie L. Hahs-Vaughn; Christine Depies DeStefano; Christopher D. Charles; Mary Little – American Journal of Evaluation, 2025
Randomized experiments are a strong design for establishing impact evidence because the random assignment mechanism theoretically allows confidence in attributing group differences to the intervention. Growth of randomized experiments within educational studies has been widely documented. However, randomized experiments within education have…
Descriptors: Educational Research, Randomized Controlled Trials, Research Problems, Educational Policy
Rebecca H. Woodland; Rebecca Mazur – American Journal of Evaluation, 2024
Logic modeling, the process that explicates how programs are constructed and theorized to bring about change, is considered to be standard evaluation practice. However, logic modeling is often experienced as a transactional, jargon-laden, discrete task undertaken to produce a document to comply with the expectations of an external entity, the…
Descriptors: Evaluation Research, Evaluation Methods, Program Evaluation, Models
Sara E. North – American Journal of Evaluation, 2024
A method called multi-attribute utility analysis (MAUA) provides a decision-making framework that facilitates comparative analysis of multiple real-world decision alternatives with unique complex attributes. Utility analysis as a measure of effectiveness has been minimally used by educational researchers to date, despite clear relevance in complex…
Descriptors: Urban Universities, Suburban Schools, Nontraditional Students, Higher Education
Christopher J. Johnstone; Anne Hayes; Elisheva Cohen; Hayley Niad; George Laryea-Adjei; Kathleen Letshabo; Adrian Shikwe; Augustine Agu – American Journal of Evaluation, 2024
This article reports on ways in which United Nations human rights treaties can be used as a normative framework for evaluating program outcomes. In this article, we conceptualize a human rights-based approach to program evaluation and locate this approach within the broader evaluation literature. The article describes how a rights-based framework…
Descriptors: Civil Rights, Inclusion, Program Evaluation, International Organizations
Jessica Lauren Perez; Anaid Yerena – American Journal of Evaluation, 2024
In the United States, in 2013, an estimated 610,042 people were homeless on a single night. Improving the effectiveness of homeless assistance programs, particularly aligning programs' practices with their goals, is critical to serving this population. Using a theory that predicts homeless exits, this study presents an innovative, low-cost evaluation tool…
Descriptors: Homeless People, Housing Needs, Program Effectiveness, Evaluation Methods
Montrosse-Moorhead, Bianca; Gambino, Anthony J.; Yahn, Laura M.; Fan, Mindy; Vo, Anne T. – American Journal of Evaluation, 2022
A budding area of research is devoted to studying evaluator curriculum, yet to date, it has focused exclusively on describing the content and emphasis of topics or competencies in university-based programs. This study aims to expand the foci of research efforts and investigates the extent to which evaluators agree on what competencies should guide…
Descriptors: Masters Programs, Doctoral Programs, Competence, Competency Based Education
Bell, Stephen H.; Stapleton, David C.; Wood, Michelle; Gubits, Daniel – American Journal of Evaluation, 2023
A randomized experiment that measures the impact of a social policy in a sample of the population reveals whether the policy will work on average with universal application. An experiment that includes only the subset of the population that volunteers for the intervention generates narrower "proof-of-concept" evidence of whether the…
Descriptors: Public Policy, Policy Formation, Federal Programs, Social Services
Nolt, Kate L.; Leviton, Laura C. – American Journal of Evaluation, 2023
Evidence-based programs and grassroots programs are often adapted during implementation. Adaptations are often hidden, ignored, or punished. Although some adaptations stem from lack of organizational capacity, evaluators report other adaptations happen in good faith or are efforts to better fit the local context. Program implementers, facilitators…
Descriptors: Fidelity, Programming, Program Implementation, Program Evaluation
Meyer, Marisol L.; Louder, Ceewin N.; Nicolas, Guerda – American Journal of Evaluation, 2022
Intervention scientists have used program theory-driven evaluation to design, implement, and assess the success of intervention programs for decades. However, interventions often are designed without the input of the community for which they are intended. The lack of incorporation of community members' voices that participate in various…
Descriptors: Change, Intervention, Community Involvement, Models
Robert Shand; Stephen M. Leach; Fiona M. Hollands; Florence Chang; Yilin Pan; Bo Yan; Dena Dossett; Samreen Nayyer-Qureshi; Yixin Wang; Laura Head – American Journal of Evaluation, 2022
We assessed whether an adaptation of value-added analysis (VAA) can provide evidence on the relative effectiveness of interventions implemented in a large school district. We analyzed two datasets, one documenting interventions received by underperforming students, and one documenting interventions received by students in schools benefiting from…
Descriptors: Value Added Models, Data Analysis, Program Evaluation, Program Effectiveness
Rebecca M. Teasdale; Cherie M. Avent; Ceily L. Moore; María B. Serrano Abreu; Xinru Yan – American Journal of Evaluation, 2025
Evaluators must attend to the destructive forces of racialization and racism to contribute to social transformation. Thus, evaluators are called to center culture, context, equity, and social justice during each step of the evaluation process. Here, we focus on the step(s) in which evaluators define program quality and specify evaluative lines of…
Descriptors: Racism, Evaluation Criteria, Social Justice, Evaluators
Ralph Renger; Elias Samuels; Jessica Renger; Ellen Champagne – American Journal of Evaluation, 2025
This article presents the Renger System Test (RST) as a method for assessing whether a system evaluation approach is suitable for evaluating complex interventions. The RST has three criteria: (1) the intervention includes multiple components, (2) these components operate interdependently, and (3) their interdependence produces an outcome that no…
Descriptors: Program Evaluation, Evaluation Methods, Evaluation Criteria, Systems Approach
Casey, Erin A.; Vanslyke, Jan; Beadnell, Blair; Masters, N. Tatiana; McFarland, Kirstin – American Journal of Evaluation, 2023
Principles-focused evaluation (PFE) can complement existing formative and outcome evaluation plans by identifying Effectiveness Principles (EPs), an operationalization of the values and standards that guide practitioners during program implementation. To date, however, few examples of PFE are available in the literature. This description of the…
Descriptors: Rape, Prevention, Sexual Abuse, Evaluation Methods