Showing all 13 results
Peer reviewed
Direct link
Kylie Anglin; Qing Liu; Vivian C. Wong – Asia Pacific Education Review, 2024
Given that decision-makers often prioritize causal research that identifies the impact of treatments on the people they serve, a key question in education research is, "Does it work?" Today, however, researchers are paying increasing attention to successive questions that are equally important from a practical standpoint: not only does it…
Descriptors: Educational Research, Program Evaluation, Validity, Classification
Peer reviewed
Direct link
Corrado Matta; Jannika Lindvall; Andreas Ryve – American Journal of Evaluation, 2024
In this article, we discuss the methodological implications of data and theory integration for Theory-Based Evaluation (TBE). TBE is a family of approaches to program evaluation that use program theories as instruments to answer questions about whether, how, and why a program works. Some of the foundational work on TBE has expressed the idea that a…
Descriptors: Data Analysis, Theories, Program Evaluation, Information Management
Peer reviewed
Direct link
Reichardt, Charles S. – American Journal of Evaluation, 2022
Evaluators are often called upon to assess the effects of programs. To do so, they need a clear understanding of how a program effect is defined. Arguably, the most widely used definition is the counterfactual one, according to which a program effect is the difference between…
Descriptors: Program Evaluation, Definitions, Causal Models, Evaluation Methods
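The snippet cuts off at the definition itself. In standard potential-outcomes notation, a common formulation of the counterfactual definition (a sketch, not quoted from Reichardt's article) is:

```latex
% Counterfactual definition of a unit-level program effect:
% Y_i(1) is unit i's outcome with the program; Y_i(0) is the same
% unit's outcome without it. Only one of the two is ever observed.
\tau_i = Y_i(1) - Y_i(0)
% Evaluations therefore usually target an average over a population:
\mathrm{ATE} = \mathbb{E}[\, Y(1) - Y(0) \,]
```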
Peer reviewed
Direct link
Bri'Ann F. Wright – Arts Education Policy Review, 2024
The purpose of this study was to evaluate the pilot program of the Turnaround Arts reform using a comparative interrupted time series design. Because the only existing evaluation of the Turnaround Arts pilot program lacks clarity and transparency, reanalyzing the program is important for understanding the effects of the initiative. I…
Descriptors: Art Education, Music Education, Program Evaluation, Program Effectiveness
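For readers unfamiliar with the design the abstract names, the sketch below fits a comparative interrupted time series (CITS) regression on simulated data. The series, the period count, and the size of the jump are all invented for illustration; this is not a reanalysis of Turnaround Arts.

```python
# Minimal CITS sketch: a treated and a comparison series, an intervention at
# period t0, and a regression whose interaction terms capture the treated
# group's level and slope changes relative to the comparison group.
import numpy as np

rng = np.random.default_rng(0)
T, t0 = 20, 10                          # 20 periods, intervention at period 10

def simulate(treated: int) -> np.ndarray:
    """One group's outcome series, with a post-period jump if treated."""
    t = np.arange(T)
    jump = 3.0 * treated * (t >= t0)    # hypothetical level effect of 3.0
    return 50 + 0.5 * t + jump + rng.normal(0, 1, T)

rows = []
for treated in (0, 1):
    y_series = simulate(treated)
    for t in range(T):
        post = int(t >= t0)
        rows.append([1, t, post, (t - t0) * post,
                     treated, treated * t, treated * post,
                     treated * (t - t0) * post, y_series[t]])

data = np.array(rows)
X, y = data[:, :-1], data[:, -1]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[6]: treated group's level change at the intervention, relative to the
# comparison group; beta[7]: its relative slope change.
print(f"level shift: {beta[6]:.2f}, slope shift: {beta[7]:.2f}")
```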
Peer reviewed
Direct link
Douthwaite, Boru; Proietti, Claudio; Polar, Vivian; Thiele, Graham – American Journal of Evaluation, 2023
This paper develops a novel approach called Outcome Trajectory Evaluation (OTE) in response to the long-causal-chain problem confronting the evaluation of research for development (R4D) projects. OTE aims to tackle four issues that result from the common practice of evaluating R4D projects against a theory of change developed at the start. The…
Descriptors: Research and Development, Change, Program Evaluation, Social Sciences
Peer reviewed
Direct link
Taylor, Jonathan E.; Sondermeyer, Elizabeth – Adult Learning, 2023
Over 2,000 years ago, Aristotle wrote of four distinct causes at play in the world we know. Those causes (the material, formal, efficient, and final causes) were meant to refer to ontological and, by extension, epistemological concerns, and were powerful enough to be seized upon and used in some form by those of very…
Descriptors: Philosophy, Causal Models, Evaluation Methods, Program Evaluation
K. L. Anglin; A. Krishnamachari; V. Wong – Grantee Submission, 2020
This article reviews important statistical methods for estimating the impact of interventions on outcomes in education settings, particularly for programs implemented in field, rather than laboratory, settings. We begin by describing the causal inference challenge in evaluating program effects, then discuss four research designs that…
Descriptors: Causal Models, Statistical Inference, Intervention, Program Evaluation
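The snippet ends before the four designs are named. As a minimal illustration of the causal inference problem the authors describe, the sketch below analyzes a simulated randomized experiment (typically the benchmark design in such reviews) with a difference in means; the data and the effect size of 2.0 are invented.

```python
# Difference-in-means estimate of a treatment effect under random assignment,
# with a conventional standard error. All data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 200
z = rng.integers(0, 2, n)                  # random assignment to treatment
y = 10 + 2.0 * z + rng.normal(0, 3, n)     # outcomes with a true effect of 2.0

effect = y[z == 1].mean() - y[z == 0].mean()
se = np.sqrt(y[z == 1].var(ddof=1) / (z == 1).sum()
             + y[z == 0].var(ddof=1) / (z == 0).sum())
print(f"estimated effect: {effect:.2f} (SE {se:.2f})")
```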
Peer reviewed
PDF on ERIC
What Works Clearinghouse, 2022
Education decision-makers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time-consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Peer reviewed
Direct link
Kraft, Matthew A. – Educational Researcher, 2020
Researchers commonly interpret effect sizes by applying benchmarks proposed by Jacob Cohen over half a century ago. However, effects that are small by Cohen's standards are large relative to the impacts of most field-based interventions. These benchmarks also fail to consider important differences in study features, program costs, and scalability…
Descriptors: Effect Size, Benchmarking, Educational Research, Intervention
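Cohen's d is the statistic those benchmarks are applied to: the mean difference between groups scaled by their pooled standard deviation. A minimal sketch on simulated data (the true effect of 0.15 SD is invented, chosen to echo the abstract's point that an effect "small" under Cohen's labels can be large for a field intervention):

```python
# Cohen's d: standardized mean difference between two groups. Simulated data.
import numpy as np

rng = np.random.default_rng(2)
treat = rng.normal(0.15, 1.0, 500)      # hypothetical treated outcomes
ctrl = rng.normal(0.00, 1.0, 500)       # hypothetical control outcomes

n1, n0 = len(treat), len(ctrl)
pooled_sd = np.sqrt(((n1 - 1) * treat.var(ddof=1)
                     + (n0 - 1) * ctrl.var(ddof=1)) / (n1 + n0 - 2))
d = (treat.mean() - ctrl.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")   # ~0.15: 'small' by Cohen's labels
```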
Lo-Hua Yuan; Avi Feller; Luke W. Miratrix – Grantee Submission, 2019
Randomized trials are often conducted with separate randomizations across multiple sites, such as schools, voting districts, or hospitals. These sites can differ in important ways, including their implementation, local conditions, and the composition of individuals. An important question in practice is whether, and under what…
Descriptors: Causal Models, Intervention, High School Students, College Attendance
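As a rough illustration of the multisite setting the abstract describes, the sketch below computes a separate difference-in-means estimate within each site and pools them with precision weights. The sites, sample sizes, and effects are simulated; the article's own estimands and methods are not reproduced here.

```python
# Per-site treatment effect estimates in a multisite randomized trial,
# pooled with inverse-variance (precision) weights. Simulated data.
import numpy as np

rng = np.random.default_rng(3)
true_site_effects = [1.0, 2.5, 0.5]      # hypothetical effects at three sites

estimates, weights = [], []
for tau in true_site_effects:
    n = 100
    z = rng.integers(0, 2, n)            # separate randomization within the site
    y = 5 + tau * z + rng.normal(0, 2, n)
    est = y[z == 1].mean() - y[z == 0].mean()
    var = (y[z == 1].var(ddof=1) / (z == 1).sum()
           + y[z == 0].var(ddof=1) / (z == 0).sum())
    estimates.append(est)
    weights.append(1.0 / var)

print("site estimates:", np.round(estimates, 2))
print(f"pooled estimate: {np.average(estimates, weights=weights):.2f}")
```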
Peer reviewed
Direct link
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower than, and usually different from, the population of substantive interest in evaluation research. The disconnect between the RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
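A minimal sketch of the sharp RDD estimator the abstract presupposes: fit local linear regressions on each side of the cutoff and take the gap between the two fits at the cutoff. The cutoff, bandwidth, and data are all invented; note that the resulting estimate applies only to units near the cutoff, which is exactly the external-validity limit the article examines.

```python
# Sharp regression discontinuity estimate via local linear fits on each side
# of the cutoff. Simulated data with a true jump of 2.0 at the cutoff.
import numpy as np

rng = np.random.default_rng(4)
n, cutoff, h = 1000, 0.0, 0.5
x = rng.uniform(-1, 1, n)                        # running variable
treated = x >= cutoff                            # sharp assignment rule
y = 1 + 0.8 * x + 2.0 * treated + rng.normal(0, 1, n)

def intercept_at_cutoff(mask: np.ndarray) -> float:
    """Fit y = a + b*(x - cutoff) within the mask; return the intercept a."""
    xs, ys = x[mask] - cutoff, y[mask]
    X = np.column_stack([np.ones_like(xs), xs])
    coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return coef[0]

left = intercept_at_cutoff(~treated & (x > cutoff - h))    # just below cutoff
right = intercept_at_cutoff(treated & (x < cutoff + h))    # just above cutoff
print(f"RDD estimate at the cutoff: {right - left:.2f}")
```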
Peer reviewed
Direct link
Pitas, Nicholas; Murray, Alison; Olsen, Max; Graefe, Alan – Journal of Extension, 2017
This article describes a modified importance-performance framework for use in evaluating recreation-based experiential learning programs. Importance-performance analysis (IPA) provides an effective and readily applicable means of evaluating many programs, but the near-universal satisfaction associated with recreation inhibits the use of IPA in…
Descriptors: Experiential Learning, Recreational Programs, Program Evaluation, Statistical Analysis
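The sketch below shows classic, unmodified IPA, the baseline the article modifies: each program attribute is placed in a quadrant by comparing its mean importance and performance ratings against the grand means. Attribute names and ratings are invented.

```python
# Classic importance-performance analysis (IPA): quadrant assignment by
# comparing each attribute's ratings to the grand means. Hypothetical data.
import numpy as np

attributes = {          # (mean importance, mean performance), both invented
    "instructor quality": (4.6, 4.4),
    "facilities":         (4.2, 3.1),
    "cost":               (3.0, 3.3),
    "scheduling":         (2.8, 4.5),
}
imp_mean = np.mean([v[0] for v in attributes.values()])
perf_mean = np.mean([v[1] for v in attributes.values()])

for name, (imp, perf) in attributes.items():
    quadrant = ("keep up the good work" if imp >= imp_mean and perf >= perf_mean
                else "concentrate here" if imp >= imp_mean
                else "low priority" if perf < perf_mean
                else "possible overkill")
    print(f"{name:18s} -> {quadrant}")
```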
Peer reviewed
PDF on ERIC
Khampirat, Buratin; McRae, Norah – Asia-Pacific Journal of Cooperative Education, 2016
Cooperative and Work-integrated Education (CWIE) programs have been widely accepted as educational programs that can effectively connect what students are learning to the world of work through placements. Because a global quality standards framework could be a very valuable resource and guide for establishing, developing, and accrediting quality…
Descriptors: Foreign Countries, Cooperative Learning, Work Experience Programs, Education Work Relationship