Publication Date
In 2025: 2
Since 2024: 3
Since 2021 (last 5 years): 6
Since 2016 (last 10 years): 8
Since 2006 (last 20 years): 14
Descriptor
Causal Models: 16
Evaluation Methods: 8
Program Evaluation: 6
Research Design: 5
Inferences: 4
Theories: 4
Program Effectiveness: 3
Quasiexperimental Design: 3
Research Methodology: 3
Accountability: 2
Comparative Analysis: 2
Source
American Journal of Evaluation: 16
Author
Andreas Ryve: 1
Andrew P. Jaciw: 1
Bell, Stephen H.: 1
Bello-Gomez, Ricardo A.: 1
Campbell, Bernadette: 1
Christopher Rhoads: 1
Cook, Thomas D.: 1
Corrado Matta: 1
Coryn, Chris L. S.: 1
Dionne, Carmen: 1
Dong, Nianbo: 1
Publication Type
Journal Articles: 16
Reports - Descriptive: 8
Reports - Research: 4
Reports - Evaluative: 3
Information Analyses: 1
Tests/Questionnaires: 1
Education Level
Elementary Education: 1
Location
Tennessee: 1
Assessments and Surveys
Early Childhood Environment…: 1
Early Childhood Longitudinal…: 1
Jason A. Schoeneberger; Christopher Rhoads – American Journal of Evaluation, 2025
Regression discontinuity (RD) designs are increasingly used for causal evaluations. However, the literature contains little guidance for conducting a moderation analysis within an RD design. The current article focuses on moderation with a single binary variable. A simulation study compares: (1) different bandwidth selectors and (2) local…
Descriptors: Regression (Statistics), Causal Models, Evaluation Methods, Multivariate Analysis
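To make the kind of analysis described above concrete, the following is a minimal sketch, not the authors' code: it assumes a sharp RD design with a cutoff at zero, a single binary moderator, a fixed bandwidth, and an invented data-generating process, and it estimates the moderated effect by local linear regression with a treatment-by-moderator interaction.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
running = rng.uniform(-1, 1, n)        # running (assignment) variable; cutoff at 0
moderator = rng.integers(0, 2, n)      # binary moderator (e.g., subgroup membership)
treated = (running >= 0).astype(int)   # sharp RD assignment rule
# Assumed truth: effect is 0.4 when moderator == 0 and 0.8 when moderator == 1
y = 0.5 * running + treated * (0.4 + 0.4 * moderator) + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "running": running, "treated": treated, "moderator": moderator})
bandwidth = 0.25                       # fixed here by assumption; the article compares data-driven selectors
local = df[df["running"].abs() <= bandwidth]

# Local linear model with a treatment-by-moderator interaction; the interaction
# coefficient estimates how the discontinuity differs across the two subgroups.
fit = smf.ols("y ~ treated * moderator + running * treated", data=local).fit()
print(fit.params[["treated", "treated:moderator"]])

In this toy setup the coefficient on treated estimates the effect at the cutoff for the moderator == 0 subgroup, and treated:moderator estimates how that effect differs for the other subgroup; the article's simulation additionally varies the bandwidth selector, which the sketch holds fixed.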
Corrado Matta; Jannika Lindvall; Andreas Ryve – American Journal of Evaluation, 2024
In this article, we discuss the methodological implications of data and theory integration for Theory-Based Evaluation (TBE). TBE is a family of approaches to program evaluation that use program theories as instruments to answer questions about whether, how, and why a program works. Some of the groundwork about TBE has expressed the idea that a…
Descriptors: Data Analysis, Theories, Program Evaluation, Information Management
Reichardt, Charles S. – American Journal of Evaluation, 2022
Evaluators are often called upon to assess the effects of programs. To assess a program effect, evaluators need a clear understanding of how a program effect is defined. Arguably, the most widely used definition of a program effect is the counterfactual one. According to the counterfactual definition, a program effect is the difference between…
Descriptors: Program Evaluation, Definitions, Causal Models, Evaluation Methods
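In standard potential-outcomes notation (used here as an assumed shorthand, not necessarily Reichardt's exact formulation), the counterfactual definition can be written as
\[
\tau_i = Y_i(1) - Y_i(0), \qquad \tau_{\text{ATE}} = \mathbb{E}\bigl[ Y_i(1) - Y_i(0) \bigr],
\]
where \(Y_i(1)\) is the outcome unit \(i\) would obtain with the program and \(Y_i(0)\) the outcome without it. Because only one of the two potential outcomes is ever observed for a given unit, the individual effect \(\tau_i\) must be inferred rather than observed directly.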
Douthwaite, Boru; Proietti, Claudio; Polar, Vivian; Thiele, Graham – American Journal of Evaluation, 2023
This paper develops a novel approach called Outcome Trajectory Evaluation (OTE) in response to the long-causal-chain problem confronting the evaluation of research for development (R4D) projects. OTE strives to tackle four issues resulting from the common practice of evaluating R4D projects based on theory of change developed at the start. The…
Descriptors: Research and Development, Change, Program Evaluation, Social Sciences
Lemire, Colombe; Rousseau, Michel; Dionne, Carmen – American Journal of Evaluation, 2023
Implementation fidelity is the degree of compliance with which the core elements of program or intervention practices are used as intended. The scientific literature reveals gaps in defining and assessing implementation fidelity in early intervention: lack of common definitions and conceptual framework as well as their lack of application. Through…
Descriptors: Early Intervention, Fidelity, Program Implementation, Compliance (Legal)
Andrew P. Jaciw – American Journal of Evaluation, 2025
By design, randomized experiments (XPs) rule out bias from confounded selection of participants into conditions. Quasi-experiments (QEs) are often considered second-best because they do not share this benefit. However, when results from XPs are used to generalize causal impacts, the benefit from unconfounded selection into conditions may be offset…
Descriptors: Elementary School Students, Elementary School Teachers, Generalization, Test Bias
Gates, Emily; Dyson, Lisa – American Journal of Evaluation, 2017
Making causal claims is central to evaluation practice because we want to know the effects of a program, project, or policy. In the past decade, the conversation about establishing causal claims has become prominent (and problematic). In response to this changing conversation about causality, we argue that evaluators need to take up some new ways…
Descriptors: Evaluation Criteria, Evaluation Methods, Educational Practices, Educational Theories
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
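The disconnect the authors describe is visible in the standard sharp-RDD estimand (standard notation, not the authors' exact formulation):
\[
\tau_{\text{RDD}} = \lim_{x \downarrow c} \mathbb{E}[Y \mid X = x] - \lim_{x \uparrow c} \mathbb{E}[Y \mid X = x],
\]
which identifies a treatment effect only for units whose running variable \(X\) lies near the cutoff \(c\), a subpopulation that is typically narrower than, and often different from, the population of substantive interest.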
Keele, Luke – American Journal of Evaluation, 2015
In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…
Descriptors: Mediation Theory, Causal Models, Inferences, Path Analysis
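For reference, the quantities usually targeted in causal mediation analysis can be written in standard counterfactual notation (an assumed notation for illustration, not the essay's own derivation):
\[
\text{NDE} = \mathbb{E}\bigl[ Y(1, M(0)) - Y(0, M(0)) \bigr], \qquad \text{NIE} = \mathbb{E}\bigl[ Y(1, M(1)) - Y(1, M(0)) \bigr],
\]
with the total effect decomposing as NDE + NIE. Identifying these effects requires no-unmeasured-confounding assumptions for both the treatment and the mediator \(M\), which underlies the essay's point that no single method serves as a "gold standard."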
Campbell, Bernadette; Mark, Melvin M. – American Journal of Evaluation, 2015
Evaluation theories can be tested in various ways. One approach, the experimental analogue study, is described and illustrated in this article. The approach is presented as a method worthy to use in the pursuit of what Alkin and others have called descriptive evaluation theory. Drawing on analogue studies conducted by the first author, we…
Descriptors: Evaluation Research, Research Methodology, Stakeholders, Theories
Dong, Nianbo – American Journal of Evaluation, 2015
Researchers have become increasingly interested in the main and interaction effects of two program variables (A and B; e.g., two treatment variables, or one treatment variable and one moderator) on outcomes. A challenge in estimating main and interaction effects is eliminating selection bias across the A-by-B groups. I introduce Rubin's causal model to…
Descriptors: Probability, Statistical Analysis, Research Design, Causal Models
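A compact way to state the estimands in potential-outcomes (Rubin causal model) notation, given here as an illustrative assumption rather than the article's own development: with potential outcomes \(Y(a, b)\) for \(a, b \in \{0, 1\}\),
\[
\tau_A(b) = \mathbb{E}\bigl[ Y(1, b) - Y(0, b) \bigr], \qquad \tau_{AB} = \tau_A(1) - \tau_A(0),
\]
so the interaction is the difference in the effect of A across the levels of B. The practical challenge the abstract names is that units are not randomly distributed across the four A-by-B cells, so these contrasts are subject to selection bias unless it is removed by design or adjustment.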
Harvill, Eleanor L.; Peck, Laura R.; Bell, Stephen H. – American Journal of Evaluation, 2013
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment-control symmetry, however, prior work has posited that it is necessary to use…
Descriptors: Experimental Groups, Control Groups, Research Design, Sampling
Cook, Thomas D.; Scriven, Michael; Coryn, Chris L. S.; Evergreen, Stephanie D. H. – American Journal of Evaluation, 2010
Legitimate knowledge claims about causation have been a central concern among evaluators and applied researchers for several decades and often have been the subject of heated debates. In recent years these debates have resurfaced with a renewed intensity, due in part to the priority currently being given to randomized experiments by many funders…
Descriptors: Evaluators, Research Design, Causal Models, Inferences

Mohr, Lawrence B. – American Journal of Evaluation, 1999
Discusses qualitative methods of impact analysis and provides an introductory treatment of one such approach. Combines an awareness of an alternative causal epistemology with current knowledge of qualitative methods of data collection and measurement to produce an approach to the analysis of impacts. (SLD)
Descriptors: Causal Models, Data Collection, Epistemology, Measurement Techniques

House, Ernest R. – American Journal of Evaluation, 2001
Explores two issues that have strongly influenced much of what has happened in evaluation in recent decades. The quantitative-qualitative debate has been fueled by changes in theories of causation. The second issue, that of the fact-value dichotomy, can be dealt with through the realization that facts and values are not separate kinds of entities,…
Descriptors: Causal Models, Evaluation Methods, Evaluation Utilization, Futures (of Society)
Pages: 1 | 2