Showing all 11 results
K. L. Anglin; A. Krishnamachari; V. Wong – Grantee Submission, 2020
This article reviews important statistical methods for estimating the impact of interventions on outcomes in education settings, particularly programs that are implemented in field, rather than laboratory, settings. We begin by describing the causal inference challenge for evaluating program effects. Then four research designs are discussed that…
Descriptors: Causal Models, Statistical Inference, Intervention, Program Evaluation
Peer reviewed
Download full text (PDF on ERIC)
What Works Clearinghouse, 2022
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Peer reviewed
Direct link
Kraft, Matthew A. – Educational Researcher, 2020
Researchers commonly interpret effect sizes by applying benchmarks proposed by Jacob Cohen over a half century ago. However, effects that are small by Cohen's standards are large relative to the impacts of most field-based interventions. These benchmarks also fail to consider important differences in study features, program costs, and scalability.…
Descriptors: Effect Size, Benchmarking, Educational Research, Intervention
Lo-Hua Yuan; Avi Feller; Luke W. Miratrix – Grantee Submission, 2019
Randomized trials are often conducted with separate randomizations across multiple sites such as schools, voting districts, or hospitals. These sites can differ in important ways, including the site's implementation, local conditions, and the composition of individuals. An important question in practice is whether – and under what…
Descriptors: Causal Models, Intervention, High School Students, College Attendance
Peer reviewed
Direct link
Harvill, Eleanor L.; Peck, Laura R.; Bell, Stephen H. – American Journal of Evaluation, 2013
Using exogenous characteristics to identify endogenous subgroups, the approach discussed in this method note creates symmetric subsets within treatment and control groups, allowing the analysis to take advantage of an experimental design. In order to maintain treatment–control symmetry, however, prior work has posited that it is necessary to use…
Descriptors: Experimental Groups, Control Groups, Research Design, Sampling
Peer reviewed
Download full text (PDF on ERIC)
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention
Peer reviewed
Direct link
Nowak, Christoph; Heinrichs, Nina – Clinical Child and Family Psychology Review, 2008
A meta-analysis encompassing all studies evaluating the impact of the Triple P-Positive Parenting Program on parent and child outcome measures was conducted in an effort to identify variables that moderate the program's effectiveness. Hierarchical linear models (HLM) with three levels of data were employed to analyze effect sizes. The results (N =…
Descriptors: Intervention, Parent Education, Child Rearing, Program Effectiveness
Peer reviewed
Direct link
Stuart, Elizabeth A. – Educational Researcher, 2007
Education researchers, practitioners, and policymakers alike are committed to identifying interventions that teach students more effectively. Increased emphasis on evaluation and accountability has increased desire for sound evaluations of these interventions; and at the same time, school-level data have become increasingly available. This article…
Descriptors: Research Methodology, Computation, Causal Models, Intervention
Peer reviewed
Direct link
Schweigert, Francis J. – American Journal of Evaluation, 2006
In the present climate of public accountability, there is increasing demand to show "what works" and what return is gained for the public from investments to improve communities. This increasing demand for accountability is being met with growing confidence in the field of philanthropy during the past 10 years that the impact or effectiveness of…
Descriptors: Evaluation Methods, Private Financial Support, Accountability, Philanthropic Foundations
Peer reviewed
MacKinnon, David P.; Dwyer, James H. – Evaluation Review, 1993
Statistical approaches to assess how prevention and intervention programs achieve their effects are described and illustrated through the evaluation of a health promotion program to reduce dietary cholesterol and a school-based drug prevention program. Analyses require the measurement of intervening or mediating variables to represent the…
Descriptors: Causal Models, Disease Control, Drug Use, Equations (Mathematics)
Randolph, Justus J.; Eronen, Pasi J. – Online Submission, 2004
Background: Although there are many high-quality models for program and evaluation planning, these models are often too intensive to be used in situations when time and resources are scarce. Additionally, there is little added value in using an elaborate and expensive program and evaluation planning procedure when programs are small or are planned…
Descriptors: Sexually Transmitted Diseases, Prevention, Communicable Diseases, Logical Thinking