Showing 1 to 15 of 30 results
Peer reviewed
Direct link
de Alteriis, Martin – American Journal of Evaluation, 2020
This article examines factors that could have influenced whether evaluations of U.S. government-funded foreign assistance programs completed in 2015 had considered unintended consequences. Logit regression models indicate that the odds of considering unintended consequences were increased when all or most of seven standard data collection methods…
Descriptors: Federal Programs, International Programs, Program Evaluation, Influences
Peer reviewed
Direct link
Pattyn, Valérie; Molenveld, Astrid; Befani, Barbara – American Journal of Evaluation, 2019
Qualitative comparative analysis (QCA) is gaining ground in evaluation circles, but the number of applications is still limited. In this article, we consider the challenges that can emerge during a QCA evaluation by drawing on our experience of conducting one in the field of development cooperation. For each stage of the evaluation process, we…
Descriptors: Qualitative Research, Comparative Analysis, Evaluation Methods, Program Evaluation
Peer reviewed
Direct link
Finucane, Mariel McKenzie; Martinez, Ignacio; Cody, Scott – American Journal of Evaluation, 2018
In the coming years, public programs will capture even more and richer data than they do now, including data from web-based tools used by participants in employment services, from tablet-based educational curricula, and from electronic health records for Medicaid beneficiaries. Program evaluators seeking to take full advantage of these data…
Descriptors: Bayesian Statistics, Data Analysis, Program Evaluation, Randomized Controlled Trials
Peer reviewed
Direct link
Stelmach, Rachel D.; Fitch, Elizabeth; Chen, Molly; Meekins, Meagan; Flueckiger, Rebecca M.; Colaço, Rajeev – American Journal of Evaluation, 2022
Monitoring, evaluation, and research activities generate important data, but they often fail to change policies or programs. In addition, local program staff and partners often feel disconnected from these activities, which undermines their ownership of data and results. To bridge the gaps between monitoring, evaluation, and research and to give…
Descriptors: Evidence Based Practice, Evaluation, Research, Global Approach
Peer reviewed
Direct link
Morell, Jonathan A. – American Journal of Evaluation, 2019
Project schedules are logic models that focus on the timing of program activities. Value derives from the fact that schedule changes are not random. Why they occur, and how long they last, can reveal information that would not be easily revealed with other approaches to evaluation. Also, using project schedules as logic models forges a strong link…
Descriptors: Scheduling, Program Administration, Models, Logical Thinking
Peer reviewed
Direct link
Cueva, Katie; Fenaughty, Andrea; Liendo, Jessica Aulasa; Hyde-Rolland, Samantha – American Journal of Evaluation, 2020
Chronic diseases with behavioral risk factors are now the leading causes of death in the United States. A national Behavioral Risk Factor Surveillance System (BRFSS) monitors those risk factors; however, there is a need for national and state evaluations of chronic disease surveillance systems. The Department of Health and Human Services/Centers…
Descriptors: Chronic Illness, At Risk Persons, Program Evaluation, Evaluation Methods
Peer reviewed
Direct link
Groth Andersson, Signe; Denvall, Verner – American Journal of Evaluation, 2017
In recent years, performance management (PM) has become a buzzword in public sector organizations. Well-functioning PM systems rely on valid performance data, but critics point out that conflicting rationale or logic among professional staff in recording information can undermine the quality of the data. Based on a case study of social service…
Descriptors: Performance, Social Services, Case Studies, Data Collection
Peer reviewed
Direct link
Brandon, Paul R.; Fukunaga, Landry L. – American Journal of Evaluation, 2014
Evaluators widely agree that stakeholder involvement is a central aspect of effective program evaluation. With the exception of articles on collaborative evaluation approaches, however, a systematic review of the breadth and depth of the literature on stakeholder involvement has not been published. In this study, we examine peer-reviewed empirical…
Descriptors: Stakeholders, Research, Data Collection, Observation
Peer reviewed
Direct link
Granger, Robert C.; Maynard, Rebecca – American Journal of Evaluation, 2015
Despite bipartisan support in Washington, DC, which dates back to the mid-1990s, the "what works" approach has yet to gain broad support among policymakers and practitioners. One way to build such support is to increase the usefulness of program impact evaluations for these groups. We describe three ways to make impact evaluations more…
Descriptors: Outcome Measures, Program Evaluation, Evaluation Utilization, Policy
Peer reviewed
Direct link
Klerman, Jacob Alex; Olsho, Lauren E. W.; Bartlett, Susan – American Journal of Evaluation, 2015
While regression discontinuity has usually been applied retrospectively to secondary data, it is even more attractive when applied prospectively. In a prospective design, data collection can be focused on cases near the discontinuity, thereby improving internal validity and substantially increasing precision. Furthermore, such prospective…
Descriptors: Regression (Statistics), Evaluation Methods, Evaluation Problems, Probability
Peer reviewed
Direct link
Koleros, Andrew; Jupp, Dee; Kirwan, Sean; Pradhan, Meeta S.; Pradhan, Pushkar K.; Seddon, David; Tumbahangfe, Ansu – American Journal of Evaluation, 2016
This article presents discussion and recommendations on approaches to retrospectively evaluating development interventions in the long term through a systems lens. It is based on experiences from the implementation of an 18-month study to investigate the impact of development interventions on economic and social change over a 40-year period in the…
Descriptors: Foreign Countries, Case Studies, Systems Development, International Programs
Peer reviewed
Direct link
Hawk, Mary – American Journal of Evaluation, 2015
Randomized controlled trials are the gold standard in research but may not fully explain or predict outcome variations in community-based interventions. Demonstrating efficacy of externally driven programs in well-controlled environments may not translate to community-based implementation where resources and priorities vary. A bottom-up evaluation…
Descriptors: African Americans, Females, Acquired Immunodeficiency Syndrome (AIDS), Risk Management
Peer reviewed
Direct link
Wharton, Tracy; Alexander, Neil – American Journal of Evaluation, 2013
This article describes lessons learned about implementing evaluations in hospital settings. In order to overcome the methodological dilemmas inherent in this environment, we used a practical participatory evaluation (P-PE) strategy to engage as many stakeholders as possible in the process of evaluating a clinical demonstration project.…
Descriptors: Hospitals, Demonstration Programs, Program Evaluation, Evaluation Methods
Peer reviewed
Direct link
Gee, Kevin A. – American Journal of Evaluation, 2014
The growth in the availability of longitudinal data (data collected over time on the same individuals) as part of program evaluations has opened up exciting possibilities for evaluators to ask more nuanced questions about how individuals' outcomes change over time. However, in order to leverage longitudinal data to glean these important insights,…
Descriptors: Longitudinal Studies, Data Analysis, Statistical Studies, Program Evaluation
Peer reviewed
Direct link
Weitzman, Beth C.; Silver, Diana – American Journal of Evaluation, 2013
In this commentary, we examine Braverman's insights into the trade-offs between feasibility and rigor in evaluation measures and reject his assessment of the trade-off as a zero-sum game. We argue that feasibility and policy salience are, like reliability and validity, intrinsic to the definition of a good measure. To reduce the tension between…
Descriptors: Program Evaluation, Measures (Individuals), Evaluation Methods, Measurement