Showing all 13 results
Peer reviewed
Reichardt, Charles S. – American Journal of Evaluation, 2022
Evaluators are often called upon to assess the effects of programs. To assess a program effect, evaluators need a clear understanding of how a program effect is defined. Arguably, the most widely used definition of a program effect is the counterfactual one. According to the counterfactual definition, a program effect is the difference between…
Descriptors: Program Evaluation, Definitions, Causal Models, Evaluation Methods
Peer reviewed
Andrew P. Jaciw – American Journal of Evaluation, 2025
By design, randomized experiments (XPs) rule out bias from confounded selection of participants into conditions. Quasi-experiments (QEs) are often considered second-best because they do not share this benefit. However, when results from XPs are used to generalize causal impacts, the benefit from unconfounded selection into conditions may be offset…
Descriptors: Elementary School Students, Elementary School Teachers, Generalization, Test Bias
Peer reviewed
Ledford, Jennifer R. – American Journal of Evaluation, 2018
Randomization of large numbers of participants to different treatment groups is often not a feasible or preferable way to answer questions of immediate interest to professional practice. Single case designs (SCDs) are a class of research designs that are experimental in nature but require only a few participants, all of whom receive the…
Descriptors: Research Design, Randomized Controlled Trials, Experimental Groups, Control Groups
Peer reviewed
Mark, Melvin M.; Caracelli, Valerie; McNall, Miles A.; Miller, Robin Lin – American Journal of Evaluation, 2018
Since 2003, the Oral History Project Team has conducted interviews with individuals who have made particularly noteworthy contributions to the theory and practice of evaluation. In 2013, Mel Mark, Valerie Caracelli, and Miles McNall sat down with Thomas Cook in Washington, D.C., during the American Evaluation Association (AEA) annual conference. The…
Descriptors: Biographies, Oral History, College Faculty, Faculty Development
Peer reviewed
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower than, and usually different from, the population of substantive interest in evaluation research. The disconnect between the RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
Peer reviewed
Cooksy, Leslie J.; Mark, Melvin M. – American Journal of Evaluation, 2012
Attention to evaluation quality is commonplace, even if sometimes implicit. Drawing on her 2010 Presidential Address to the American Evaluation Association, Leslie Cooksy suggests that evaluation quality depends, at least in part, on the intersection of three factors: (a) evaluator competency, (b) aspects of the evaluation environment or context,…
Descriptors: Competence, Context Effect, Educational Resources, Educational Quality
Peer reviewed
Brandon, Paul R.; Taum, Alice K. H.; Young, Donald B.; Pottenger, Francis M., III; Speitel, Thomas W. – American Journal of Evaluation, 2008
In the growing literature on the evaluation of program implementation, less has been said about evaluating program quality than about evaluating other aspects of program implementation. Furthermore, most articles and reports in the program-implementation evaluation literature have presented only brief descriptions of how implementation instruments…
Descriptors: Program Implementation, Program Effectiveness, Educational Quality, Evaluation Methods
Peer reviewed
Weiss, Carol H.; Murphy-Graham, Erin; Petrosino, Anthony; Gandhi, Allison G. – American Journal of Evaluation, 2008
Evaluators sometimes wish for a Fairy Godmother who would make decision makers pay attention to evaluation findings when choosing programs to implement. The U.S. Department of Education came close to creating such a Fairy Godmother when it required school districts to choose drug abuse prevention programs only if their effectiveness was supported…
Descriptors: Evaluators, Prevention, Drug Abuse, Program Effectiveness
Peer reviewed
Frey, Bruce B.; Lohmeier, Jill H.; Lee, Stephen W.; Tollefson, Nona – American Journal of Evaluation, 2006
Collaboration is a prerequisite for the sustainability of interagency programs, particularly those programs initially created with the support of time-limited grant-funding sources. From the perspective of evaluators, however, assessing collaboration among grant partners is often difficult. It is also challenging to present collaboration data to…
Descriptors: Grants, Evaluation Methods, Reliability, Agency Cooperation
Peer reviewed
Brandon, Paul R. – American Journal of Evaluation, 1998
Shows how to bridge the gap between collaborative evaluations with extensive stakeholder participation and noncollaborative evaluations in which stakeholders do not participate to a great degree. Synthesizes research that shows that interaction with stakeholders helps enhance validity in noncollaborative evaluations. (SLD)
Descriptors: Evaluation Methods, Interaction, Participation, Program Evaluation
Peer reviewed
Mowbray, Carol T.; Holter, Mark C.; Teague, Gregory B.; Bybee, Deborah – American Journal of Evaluation, 2003
Fidelity may be defined as the extent to which delivery of an intervention adheres to the protocol or program model originally developed. Fidelity measurement has increasing significance for evaluation, treatment effectiveness research, and service administration. Yet few published studies using fidelity criteria provide details on the…
Descriptors: Program Evaluation, Evaluation Criteria, Statistical Analysis, Evaluation Methods
Peer reviewed
Mabry, Linda – American Journal of Evaluation, 2004
Data limitations severe enough to undermine the validity of findings discomfort evaluators whenever they occur. Unfortunately, such occurrences are not infrequent and are faced not only by a "freshly minted graduate of a respected doctoral program in evaluation" but also by seasoned professionals (Bamberger, Rugh, & Mabry, forthcoming). Data may be…
Descriptors: Access to Information, Information Sources, Evaluators, Doctoral Programs
Peer reviewed
Nassif, Nader; Khalil, Yvette – American Journal of Evaluation, 2006
Teaching difficulties often require creative approaches. These difficulties are often compounded when students in class show anxiety toward the material presented, particularly in the case of quantitative methods--a "fear of numbers." This case presented itself in an advanced course in health behavior theory, where teaching the concepts of validity…
Descriptors: Figurative Language, Measures (Individuals), Reliability, Class Activities