Showing 1 to 15 of 16 results
Peer reviewed
Jason A. Schoeneberger; Christopher Rhoads – American Journal of Evaluation, 2025
Regression discontinuity (RD) designs are increasingly used for causal evaluations. However, the literature contains little guidance for conducting a moderation analysis within an RDD context. The current article focuses on moderation with a single binary variable. A simulation study compares: (1) different bandwidth selectors and (2) local…
Descriptors: Regression (Statistics), Causal Models, Evaluation Methods, Multivariate Analysis
Peer reviewed
Corrado Matta; Jannika Lindvall; Andreas Ryve – American Journal of Evaluation, 2024
In this article, we discuss the methodological implications of data and theory integration for Theory-Based Evaluation (TBE). TBE is a family of approaches to program evaluation that use program theories as instruments to answer questions about whether, how, and why a program works. Some of the groundwork about TBE has expressed the idea that a…
Descriptors: Data Analysis, Theories, Program Evaluation, Information Management
Peer reviewed
Reichardt, Charles S. – American Journal of Evaluation, 2022
Evaluators are often called upon to assess the effects of programs. To assess a program effect, evaluators need a clear understanding of how a program effect is defined. Arguably, the most widely used definition of a program effect is the counterfactual one. According to the counterfactual definition, a program effect is the difference between…
Descriptors: Program Evaluation, Definitions, Causal Models, Evaluation Methods
Peer reviewed
Douthwaite, Boru; Proietti, Claudio; Polar, Vivian; Thiele, Graham – American Journal of Evaluation, 2023
This paper develops a novel approach called Outcome Trajectory Evaluation (OTE) in response to the long-causal-chain problem confronting the evaluation of research for development (R4D) projects. OTE strives to tackle four issues resulting from the common practice of evaluating R4D projects based on theory of change developed at the start. The…
Descriptors: Research and Development, Change, Program Evaluation, Social Sciences
Peer reviewed
Lemire, Colombe; Rousseau, Michel; Dionne, Carmen – American Journal of Evaluation, 2023
Implementation fidelity is the degree of compliance with which the core elements of program or intervention practices are used as intended. The scientific literature reveals gaps in defining and assessing implementation fidelity in early intervention: lack of common definitions and conceptual framework as well as their lack of application. Through…
Descriptors: Early Intervention, Fidelity, Program Implementation, Compliance (Legal)
Peer reviewed
Andrew P. Jaciw – American Journal of Evaluation, 2025
By design, randomized experiments (XPs) rule out bias from confounded selection of participants into conditions. Quasi-experiments (QEs) are often considered second-best because they do not share this benefit. However, when results from XPs are used to generalize causal impacts, the benefit from unconfounded selection into conditions may be offset…
Descriptors: Elementary School Students, Elementary School Teachers, Generalization, Test Bias
Peer reviewed
Gates, Emily; Dyson, Lisa – American Journal of Evaluation, 2017
Making causal claims is central to evaluation practice because we want to know the effects of a program, project, or policy. In the past decade, the conversation about establishing causal claims has become prominent (and problematic). In response to this changing conversation about causality, we argue that evaluators need to take up some new ways…
Descriptors: Evaluation Criteria, Evaluation Methods, Educational Practices, Educational Theories
Peer reviewed
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
Peer reviewed
Keele, Luke – American Journal of Evaluation, 2015
In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…
Descriptors: Mediation Theory, Causal Models, Inferences, Path Analysis
Peer reviewed
Campbell, Bernadette; Mark, Melvin M. – American Journal of Evaluation, 2015
Evaluation theories can be tested in various ways. One approach, the experimental analogue study, is described and illustrated in this article. The approach is presented as a method worthy to use in the pursuit of what Alkin and others have called descriptive evaluation theory. Drawing on analogue studies conducted by the first author, we…
Descriptors: Evaluation Research, Research Methodology, Stakeholders, Theories
Peer reviewed
Dong, Nianbo – American Journal of Evaluation, 2015
Researchers have become increasingly interested in programs' main and interaction effects of two variables (A and B, e.g., two treatment variables or one treatment variable and one moderator) on outcomes. A challenge for estimating main and interaction effects is to eliminate selection bias across A-by-B groups. I introduce Rubin's causal model to…
Descriptors: Probability, Statistical Analysis, Research Design, Causal Models
Peer reviewed
Harvill, Eleanor L.; Peck, Laura R.; Bell, Stephen H. – American Journal of Evaluation, 2013
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment--control symmetry, however, prior work has posited that it is necessary to use…
Descriptors: Experimental Groups, Control Groups, Research Design, Sampling
Peer reviewed
Cook, Thomas D.; Scriven, Michael; Coryn, Chris L. S.; Evergreen, Stephanie D. H. – American Journal of Evaluation, 2010
Legitimate knowledge claims about causation have been a central concern among evaluators and applied researchers for several decades and often have been the subject of heated debates. In recent years these debates have resurfaced with a renewed intensity, due in part to the priority currently being given to randomized experiments by many funders…
Descriptors: Evaluators, Research Design, Causal Models, Inferences
Peer reviewed
Mohr, Lawrence B. – American Journal of Evaluation, 1999
Discusses qualitative methods of impact analysis and provides an introductory treatment of one such approach. Combines an awareness of an alternative causal epistemology with current knowledge of qualitative methods of data collection and measurement to produce an approach to the analysis of impacts. (SLD)
Descriptors: Causal Models, Data Collection, Epistemology, Measurement Techniques
Peer reviewed
House, Ernest R. – American Journal of Evaluation, 2001
Explores two issues that have strongly influenced much of what has happened in evaluation in recent decades. The quantitative-qualitative debate has been fueled by changes in theories of causation. The second issue, that of the fact-value dichotomy, can be dealt with through the realization that facts and values are not separate kinds of entities,…
Descriptors: Causal Models, Evaluation Methods, Evaluation Utilization, Futures (of Society)