Showing all 15 results
A. Brooks Bowden – AERA Open, 2023
Although experimental evaluations have been labeled the "gold standard" of evidence for policy (U.S. Department of Education, 2003), evaluations without an analysis of costs are not sufficient for policymaking (Monk, 1995; Ross et al., 2007). Funding organizations now require cost-effectiveness data in most evaluations of effects. Yet,…
Descriptors: Cost Effectiveness, Program Evaluation, Economics, Educational Finance
Peer reviewed
PDF on ERIC: Download full text
What Works Clearinghouse, 2022
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Peer reviewed
Direct link
Hallberg, Kelly; Williams, Ryan; Swanlund, Andrew – Journal of Research on Educational Effectiveness, 2020
More aggregate data on school performance is available than ever before, opening up new possibilities for applied researchers interested in assessing the effectiveness of school-level interventions quickly and at a relatively low cost by implementing comparative interrupted time series (CITS) designs. We examine the extent to which effect…
Descriptors: Data Use, Research Methodology, Program Effectiveness, Design
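The CITS design named in the abstract above can be sketched in a few lines: fit pre- and post-interruption trends for a treated and a comparison group, and take the difference in their level shifts. The data below are simulated for illustration only, not drawn from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2008, 2018)
t = years - 2013.0                # center time at the interruption
post = (t >= 0).astype(float)

def series(base, slope, jump):
    # Illustrative school-mean outcomes: linear trend plus a level shift.
    return base + slope * t + jump * post + rng.normal(0, 0.3, t.size)

y1 = series(50.0, 0.6, 2.0)       # treated schools: simulated jump of 2.0
y0 = series(48.0, 0.6, 0.0)       # comparison schools: no jump

# Pooled model: y = b0 + b1*t + b2*post + b3*g + b4*(g*t) + b5*(g*post),
# where g = 1 for the treated group; b5 is the CITS impact estimate.
g = np.r_[np.ones(t.size), np.zeros(t.size)]
tt, pp, yy = np.r_[t, t], np.r_[post, post], np.r_[y1, y0]
X = np.column_stack([np.ones(yy.size), tt, pp, g, g * tt, g * pp])
beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
print(round(beta[5], 2))          # CITS impact estimate; simulated jump is 2.0
```

The comparison series nets out any shock common to both groups at the interruption, which is what distinguishes CITS from a single-group interrupted time series.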
Peer reviewed
Direct link
May, Henry; Jones, Akisha; Blakeney, Aly – AERA Online Paper Repository, 2019
Using an RD design provides statistically robust estimates while giving researchers an alternative causal estimation tool for educational environments where an RCT may not be feasible. Results from External Evaluation of the i3 Scale-Up of Reading Recovery show that impact estimates were remarkably similar between a randomized control…
Descriptors: Regression (Statistics), Research Design, Randomized Controlled Trials, Research Methodology
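A sharp RD design of the kind compared to an RCT above can be sketched as follows: assignment is determined by a cutoff on a running variable, and the impact is the jump in outcomes at that cutoff, estimated by local linear regression. The cutoff, bandwidth, and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
score = rng.uniform(-10.0, 10.0, 2000)      # running variable, cutoff at 0
treated = (score < 0).astype(float)         # below the cutoff -> intervention
outcome = 20 + 0.4 * score + 3.0 * treated + rng.normal(0, 1.0, score.size)

# Local linear regression within a bandwidth h of the cutoff, with
# separate slopes on each side; the coefficient on `treated` is the
# estimated jump in outcomes at the cutoff.
h = 5.0
m = np.abs(score) <= h
X = np.column_stack([np.ones(m.sum()), treated[m], score[m],
                     treated[m] * score[m]])
beta, *_ = np.linalg.lstsq(X, outcome[m], rcond=None)
print(round(beta[1], 2))                    # simulated impact is 3.0
```

Unlike an RCT, the estimate is local to students near the cutoff, which is one reason the close agreement with the RCT reported above is notable.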
Peer reviewed
PDF on ERIC: Download full text
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are WWC Study Review Guides, which are intended for use by WWC certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
PDF on ERIC: Download full text
What Works Clearinghouse, 2016
This document provides step-by-step instructions on how to complete the Study Review Guide (SRG, Version S3, V2) for single-case designs (SCDs). Reviewers will complete an SRG for every What Works Clearinghouse (WWC) review. A completed SRG should be a reviewer's independent assessment of the study, relative to the criteria specified in the review…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
PDF on ERIC: Download full text
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2017
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies. The impact estimators are derived using the building blocks of experimental designs with minimal assumptions, and have good statistical properties. The methods apply to randomized controlled trials (RCTs) and…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
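The simplest instance of the design-based approach described above is the difference in means for an RCT, with the Neyman variance that follows from randomization itself rather than from a regression model. The data below are simulated; the true design-based estimators in this literature cover far more complex designs.

```python
import numpy as np

rng = np.random.default_rng(2)
y1 = rng.normal(75.0, 10.0, 200)   # treatment-group outcomes (simulated)
y0 = rng.normal(70.0, 10.0, 200)   # control-group outcomes (simulated)

# Design-based impact estimate: difference in group means.
impact = y1.mean() - y0.mean()

# Neyman variance: sum of the two groups' sampling variances, derived
# from the randomization with minimal modeling assumptions.
var_neyman = y1.var(ddof=1) / y1.size + y0.var(ddof=1) / y0.size
se = var_neyman ** 0.5
print(f"impact = {impact:.2f}, SE = {se:.2f}")   # simulated impact is 5.0
```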
Peer reviewed
PDF on ERIC: Download full text
Kautz, Tim; Schochet, Peter Z.; Tilley, Charles – National Center for Education Evaluation and Regional Assistance, 2017
A new design-based theory has recently been developed to estimate impacts for randomized controlled trials (RCTs) and basic quasi-experimental designs (QEDs) across a wide range of designs used in social policy research (Imbens & Rubin, 2015; Schochet, 2016). These methods use the potential outcomes framework and known features of study designs…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
Peer reviewed
PDF on ERIC: Download full text
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2017
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies (Imbens and Rubin, 2015; Schochet, 2015, 2016). The estimators are derived using the building blocks of experimental designs with minimal assumptions, and are unbiased and normally distributed in large samples…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
Peer reviewed
Direct link
Reeves, Barnaby C.; Higgins, Julian P. T.; Ramsay, Craig; Shea, Beverley; Tugwell, Peter; Wells, George A. – Research Synthesis Methods, 2013
Background: Methods need to be further developed to include non-randomised studies (NRS) in systematic reviews of the effects of health care interventions. NRS are often required to answer questions about harms and interventions for which evidence from randomised controlled trials (RCTs) is not available. Methods used to review randomised…
Descriptors: Research Methodology, Research Design, Health Services, Workshops
Ruth Maisey; Svetlana Speight; Chris Bonell; Susan Purdon; Peter Keogh; Ivonne Wollny; Annik M. Sorhaindo; Kaye Wellings – Sage Research Methods Cases, 2014
In 2009, the government's Department for Education commissioned a team of researchers at NatCen Social Research to evaluate the effectiveness of the youth development/teenage pregnancy prevention programme 'Teens and Toddlers'. Previous studies had positive findings but had not been methodologically rigorous. We…
Descriptors: Youth Programs, Program Evaluation, Adolescents, Toddlers
Peer reviewed
Direct link
Valentine, Jeffrey C.; Thompson, Simon G. – Research Synthesis Methods, 2013
Background: Confounding caused by selection bias is often a key difference between non-randomized studies (NRS) and randomized controlled trials (RCTs) of interventions. Key methodological issues: In this third paper of the series, we consider issues relating to the inclusion of NRS in systematic reviews on the effects of interventions. We discuss…
Descriptors: Research Design, Randomized Controlled Trials, Intervention, Bias
Peer reviewed
PDF on ERIC: Download full text
Bloom, Howard S.; Porter, Kristin E.; Weiss, Michael J.; Raudenbush, Stephen – Society for Research on Educational Effectiveness, 2013
To date, evaluation research and policy analysis have focused mainly on average program impacts and paid little systematic attention to their variation. Recently, the growing number of multi-site randomized trials being planned and conducted makes it increasingly feasible to study "cross-site" variation in impacts. Important…
Descriptors: Research Methodology, Policy, Evaluation Research, Randomized Controlled Trials
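The cross-site variation idea above can be illustrated with a simple method-of-moments calculation: the variance of site-level impact estimates overstates true impact variation by the average within-site sampling variance, so subtract the latter. Site counts, sample sizes, and outcomes below are all invented; the paper's actual estimators are likely more elaborate.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n = 40, 50                            # 40 sites, 50 students per arm
site_impacts = rng.normal(5.0, 2.0, n_sites)   # impacts vary across sites

est, samp_var = [], []
for b in site_impacts:
    y1 = rng.normal(50.0 + b, 10.0, n)         # treatment arm at this site
    y0 = rng.normal(50.0, 10.0, n)             # control arm at this site
    est.append(y1.mean() - y0.mean())
    samp_var.append(y1.var(ddof=1) / n + y0.var(ddof=1) / n)

est = np.array(est)
# Observed variance of site estimates = cross-site impact variance
# plus average sampling variance, so subtract the latter.
tau2 = est.var(ddof=1) - np.mean(samp_var)
print(round(max(tau2, 0.0) ** 0.5, 2))         # estimated cross-site impact SD
```

The subtraction matters: with few students per site, most of the raw spread in site estimates is sampling noise rather than genuine impact variation.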
Peer reviewed
PDF on ERIC: Download full text
Lord, Pippa; Rabiasz, Adam; Roy, Palak; Harland, Jennie; Styles, Ben; Fowler, Katherine – Education Endowment Foundation, 2017
The Evidence-based Literacy Support-"Literacy Octopus" Trial tested a range of dissemination interventions and resources, all of which aimed to engage schools in using evidence-based materials to improve teaching and learning in Key Stage 2 literacy. Four delivery partners provided interventions. These included light-touch,…
Descriptors: Evidence Based Practice, Literacy, Intervention, Educational Resources
Peer reviewed
PDF on ERIC: Download full text
Friedman-Krauss, Allison H.; Connors, Maia C.; Morris, Pamela A. – Society for Research on Educational Effectiveness, 2013
As a result of the 1998 reauthorization of Head Start, the Department of Health and Human Services conducted a national evaluation of the Head Start program. The goal of Head Start is to improve the school readiness skills of low-income children in the United States. There is a substantial body of experimental and correlational research that has…
Descriptors: Early Intervention, Preschool Education, School Readiness, Low Income Groups