Source: American Journal of Evaluation (44)
Showing 1 to 15 of 44 results
Peer reviewed
Meyer, Marisol L.; Louder, Ceewin N.; Nicolas, Guerda – American Journal of Evaluation, 2022
Intervention scientists have used program theory-driven evaluation to design, implement, and assess the success of intervention programs for decades. However, interventions often are designed without the input of the community for which they are intended. The lack of incorporation of community members' voices that participate in various…
Descriptors: Change, Intervention, Community Involvement, Models
Peer reviewed
Bell, Stephen H.; Stapleton, David C.; Wood, Michelle; Gubits, Daniel – American Journal of Evaluation, 2023
A randomized experiment that measures the impact of a social policy in a sample of the population reveals whether the policy will work on average with universal application. An experiment that includes only the subset of the population that volunteers for the intervention generates narrower "proof-of-concept" evidence of whether the…
Descriptors: Public Policy, Policy Formation, Federal Programs, Social Services
Peer reviewed
Bower, Kyle L. – American Journal of Evaluation, 2022
The purpose of this paper is to introduce the Five-Level Qualitative Data Analysis (5LQDA) method for ATLAS.ti as a way to intentionally design methodological approaches applicable to the field of evaluation. To demonstrate my analytical process using ATLAS.ti, I use examples from an existing evaluation of a STEM Peer Learning Assistant program.…
Descriptors: Qualitative Research, Data Analysis, Program Evaluation, Evaluation Methods
Peer reviewed
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
Peer reviewed
Macy, Rebecca J.; Eckhardt, Amanda; Wretman, Christopher J.; Hu, Ran; Kim, Jeongsuk; Wang, Xinyi; Bombeeck, Cindy – American Journal of Evaluation, 2022
The increasing number of anti-trafficking organizations and the funding for anti-trafficking services have greatly outpaced evaluative efforts, resulting in critical knowledge gaps, which have been underscored by recent recommendations for the development of greater evaluation capacity in the anti-trafficking field. In response to these calls, this…
Descriptors: Evaluation Methods, Slavery, Antisocial Behavior, Crime
Peer reviewed
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials (defined as multiple sites or multiple arms in a single evaluation, and replications) in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
Peer reviewed
Patton, Michael Quinn – American Journal of Evaluation, 2015
Our understanding of programs is enhanced when trained, skilled, and observant evaluators go "into the field"--the real world where programs are conducted--paying attention to what's going on, systematically documenting what they see, and reporting what they learn. The article opens by presenting and illustrating twelve reasons for…
Descriptors: Program Evaluation, Evaluation Methods, Design Requirements, Field Studies
Peer reviewed
DeBarger, Angela Haydel; Penuel, William R.; Harris, Christopher J.; Kennedy, Cathleen A. – American Journal of Evaluation, 2016
Evaluators must employ research designs that generate compelling evidence related to the worth or value of programs, of which assessment data often play a critical role. This article focuses on assessment design in the context of evaluation. It describes the process of using the Framework for K-12 Science Education and Next Generation Science…
Descriptors: Intervention, Program Evaluation, Research Design, Science Tests
Peer reviewed
Moulton, Shawn R.; Peck, Laura R.; Greeney, Adam – American Journal of Evaluation, 2018
In experimental evaluations of health and social programs, the role of dosage is rarely explored because researchers cannot usually randomize individuals to experience varying dosage levels. Instead, such evaluations reveal the average effects of exposure to an intervention, although program exposure may vary widely. This article compares three…
Descriptors: Marriage, Intervention, Prediction, Program Effectiveness
Peer reviewed
Zandniapour, Lily; Deterding, Nicole M. – American Journal of Evaluation, 2018
Tiered evidence initiatives are an important federal strategy to incentivize and accelerate the use of rigorous evidence in planning, implementing, and assessing social service investments. The Social Innovation Fund (SIF), a program of the Corporation for National and Community Service, adopted a public-private partnership approach to tiered…
Descriptors: Program Effectiveness, Program Evaluation, Research Needs, Evidence
Peer reviewed
Louie, Josephine; Rhoads, Christopher; Mark, June – American Journal of Evaluation, 2016
Interest in the regression discontinuity (RD) design as an alternative to randomized control trials (RCTs) has grown in recent years. There is little practical guidance, however, on conditions that would lead to a successful RD evaluation or the utility of studies with underpowered RD designs. This article describes the use of RD design to…
Descriptors: Regression (Statistics), Program Evaluation, Algebra, Supplementary Education
Peer reviewed
McAlindon, Kathryn; Neal, Jennifer Watling; Neal, Zachary P.; Mills, Kristen J.; Lawlor, Jennifer – American Journal of Evaluation, 2019
Despite growing interest in data visualization and graphically aided reporting, the evaluation literature could benefit from additional guidance on systematically integrating visual communication design and marketing into comprehensive communication strategies to improve data dissemination. This article describes the role of targeted communication…
Descriptors: Visual Aids, Marketing, Graphic Arts, Technical Writing
Peer reviewed
Solmeyer, Anna R.; Constance, Nicole – American Journal of Evaluation, 2015
Traditionally, evaluation has primarily tried to answer the question "Does a program, service, or policy work?" Recently, more attention has been given to questions about variation in program effects and the mechanisms through which program effects occur. Addressing these kinds of questions requires moving beyond assessing average program…
Descriptors: Program Effectiveness, Program Evaluation, Program Content, Measurement Techniques
Peer reviewed
Ahlin, Eileen M. – American Journal of Evaluation, 2015
Evaluation research conducted in agencies that sanction law violators is often challenging, and due process may preclude evaluators from using experimental methods in traditional criminal justice agencies such as police, courts, and corrections. However, administrative agencies often deal with the same population but are not bound by due process…
Descriptors: Research Methodology, Evaluation Research, Criminals, Correctional Institutions
Peer reviewed
Vanderkruik, Rachel; McPherson, Marianne E. – American Journal of Evaluation, 2017
Evaluating initiatives implemented across multiple settings can elucidate how various contextual factors may influence both implementation and outcomes. Understanding context is especially critical when the same program has varying levels of success across settings. We present a framework for evaluating contextual factors affecting an initiative…
Descriptors: Public Health, Sustainability, Program Implementation, Context Effect