Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 5
Since 2016 (last 10 years) | 11
Since 2006 (last 20 years) | 14
Author
Feller, Avi | 2
Lu, Benjamin | 2
Deke, John | 2
Ben-Michael, Eli | 2
Kautz, Tim | 2
Miratrix, Luke | 2
Wei, Thomas | 2
Anderson-Clark, Helen | 1
Erickson, Anna | 1
Gambino, Anthony | 1
Barnow, Burt S. | 1
Publication Type
Reports - Research | 7
Journal Articles | 4
Reports - Evaluative | 4
Guides - Non-Classroom | 2
Reports - Descriptive | 2
Education Level
Elementary Secondary Education | 3
Adult Education | 2
Early Childhood Education | 2
Elementary Education | 2
Grade 1 | 1
Primary Education | 1
Audience
Policymakers | 1
Researchers | 1
Location
United Kingdom (England) | 2
Assessments and Surveys
Iowa Tests of Basic Skills | 1
Anthony Gambino – Society for Research on Educational Effectiveness, 2021
Analysis of symmetrically predicted endogenous subgroups (ASPES) is an approach to assessing heterogeneity in an intent-to-treat (ITT) effect from a randomized experiment when an intermediate variable (one measured after random assignment and before outcomes) is hypothesized to be related to the ITT effect but is measured in only one group. For example,…
Descriptors: Randomized Controlled Trials, Prediction, Program Evaluation, Credibility
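The ASPES logic can be made concrete with a short sketch. The code below is a minimal, hypothetical rendering (not Gambino's implementation): it fits a model predicting the intermediate variable from baseline covariates in the treatment group, where that variable is observed, applies the model symmetrically to both arms, and compares ITT effects across the predicted subgroups. All column names are illustrative.

```python
# Minimal ASPES-style sketch; variable names are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def aspes_itt_by_subgroup(df, baseline_cols, mediator_col, outcome_col, treat_col):
    # 1. Fit the prediction model on treatment-group units only,
    #    where the intermediate variable is actually measured.
    treated = df[df[treat_col] == 1]
    model = LogisticRegression().fit(treated[baseline_cols], treated[mediator_col])

    # 2. Apply the fitted model symmetrically to *all* units, so that
    #    predicted subgroup membership depends only on covariates
    #    measured before random assignment.
    df = df.copy()
    prob = model.predict_proba(df[baseline_cols])[:, 1]
    df["predicted_subgroup"] = (prob >= 0.5).astype(int)

    # 3. Estimate the ITT effect (treatment-control difference in mean
    #    outcomes) within each predicted subgroup; assumes both arms
    #    are represented in each subgroup.
    effects = {}
    for group, sub in df.groupby("predicted_subgroup"):
        effects[group] = (sub.loc[sub[treat_col] == 1, outcome_col].mean()
                          - sub.loc[sub[treat_col] == 0, outcome_col].mean())
    return effects
```

Because subgroup membership is predicted from pre-randomization covariates alone, the within-subgroup contrasts remain experimental comparisons.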
Benjamin Lu; Eli Ben-Michael; Avi Feller; Luke Miratrix – Journal of Educational and Behavioral Statistics, 2023
In multisite trials, learning about treatment effect variation across sites is critical for understanding where and for whom a program works. Unadjusted comparisons, however, capture "compositional" differences in the distributions of unit-level features as well as "contextual" differences in site-level features, including…
Descriptors: Statistical Analysis, Statistical Distributions, Program Implementation, Comparative Analysis
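The compositional/contextual distinction can be illustrated with a simple reweighting sketch. The code below is a hedged stand-in for the authors' estimator: it post-stratifies each site's treatment effect to the pooled covariate distribution, so that remaining cross-site variation is not driven by differences in the units each site enrolls. It assumes a single discrete unit-level covariate and that every site has treated and control units in each stratum; all names are illustrative.

```python
# Post-stratified site effects; a hedged sketch, not the authors' method.
import pandas as pd

def poststratified_site_effects(df, site_col, treat_col, outcome_col, stratum_col):
    # The pooled covariate distribution serves as the common target, so
    # every site's effect is evaluated over the same unit composition.
    target = df[stratum_col].value_counts(normalize=True)

    effects = {}
    for site, site_df in df.groupby(site_col):
        effect = 0.0
        for stratum, weight in target.items():
            cell = site_df[site_df[stratum_col] == stratum]
            # Assumes each site has treated and control units per stratum.
            diff = (cell.loc[cell[treat_col] == 1, outcome_col].mean()
                    - cell.loc[cell[treat_col] == 0, outcome_col].mean())
            effect += weight * diff
        effects[site] = effect
    return effects
```

Differences that survive this adjustment are "contextual" in the abstract's sense; differences that disappear were "compositional."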
Benjamin Lu; Eli Ben-Michael; Avi Feller; Luke Miratrix – Grantee Submission, 2022
In multisite trials, learning about treatment effect variation across sites is critical for understanding where and for whom a program works. Unadjusted comparisons, however, capture "compositional" differences in the distributions of unit-level features as well as "contextual" differences in site-level features, including…
Descriptors: Statistical Analysis, Statistical Distributions, Program Implementation, Comparative Analysis
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
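The design tension the abstract raises follows directly from the standard minimum detectable effect size (MDES) calculation for a two-arm, individual-level RCT. The sketch below assumes the conventional multiplier of roughly 2.8 (two-sided 5% test, 80% power) and no covariate adjustment; because the MDES shrinks only with the square root of the sample size, halving the target impact roughly quadruples the required sample.

```python
# MDES and required n for a simple two-arm RCT; standard approximation,
# assuming an ~2.8 multiplier (80% power, two-sided 5%) and no covariates.
from math import sqrt

def mdes(n, p_treated=0.5, multiplier=2.8):
    # Multiplier times the standard error of the impact estimate,
    # expressed in standard-deviation (effect-size) units.
    return multiplier * sqrt(1.0 / (p_treated * (1.0 - p_treated) * n))

def required_n(target_mdes, p_treated=0.5, multiplier=2.8):
    # Inverts the MDES formula to solve for the total sample size.
    return multiplier ** 2 / (p_treated * (1.0 - p_treated) * target_mdes ** 2)

print(round(required_n(0.20)))  # ~784 students for Cohen's "small" 0.20 SD
print(round(required_n(0.10)))  # ~3136 students for a 0.10 SD impact
```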
Heather C. Hill; Anna Erickson – Annenberg Institute for School Reform at Brown University, 2021
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials…
Descriptors: Fidelity, Program Effectiveness, Multivariate Analysis, Randomized Controlled Trials
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms within a single evaluation, as well as replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
Deke, John; Wei, Thomas; Kautz, Tim – Society for Research on Educational Effectiveness, 2018
Evaluators of education interventions increasingly need to design studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." For example, an evaluation of Response to Intervention from the Institute of Education Sciences (IES) detected impacts ranging from 0.13 to 0.17 standard…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are WWC Study Review Guides, which are intended for use by WWC-certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Dawson, Anneka; Yeomans, Emily; Brown, Elena Rosa – Educational Research, 2018
Background: The Education Endowment Foundation (EEF) is an independent charity that was established in 2011 with the explicit aim of breaking the link between family income and educational achievement in England. Over the seven years since its inception, EEF has contributed to the existing evidence base by funding over one hundred randomised…
Descriptors: Foreign Countries, Educational Research, Randomized Controlled Trials, Research Problems
What Works Clearinghouse, 2016
This document provides step-by-step instructions on how to complete the Study Review Guide (SRG, Version S3, V2) for single-case designs (SCDs). Reviewers will complete an SRG for every What Works Clearinghouse (WWC) review. A completed SRG should be a reviewer's independent assessment of the study, relative to the criteria specified in the review…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2014
Cluster randomized trials (CRTs), or studies in which intact groups of individuals are randomly assigned to a condition, are becoming more common in the evaluation of educational programs, policies, and practices. The website for the National Center for Education Evaluation and Regional Assistance (NCEE) reveals that it has launched over 30…
Descriptors: Cluster Grouping, Randomized Controlled Trials, Statistical Analysis, Computation
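For cluster designs like those NCEE fields, the analogous calculation must account for the intraclass correlation (ICC). The sketch below uses the standard two-level CRT approximation with the same ~2.8 multiplier and no covariate adjustment; it illustrates the familiar design lesson that adding clusters lowers the MDES far more than enlarging them. Parameter values are illustrative.

```python
# Two-level cluster-randomized trial MDES; standard approximation with an
# ~2.8 multiplier (80% power, two-sided 5%) and no covariate adjustment.
from math import sqrt

def crt_mdes(j_clusters, n_per_cluster, icc, p_treated=0.5, multiplier=2.8):
    p = p_treated
    variance = (icc / (p * (1 - p) * j_clusters)
                + (1 - icc) / (p * (1 - p) * j_clusters * n_per_cluster))
    return multiplier * sqrt(variance)

print(round(crt_mdes(40, 25, icc=0.15), 2))  # baseline design -> ~0.38
print(round(crt_mdes(40, 50, icc=0.15), 2))  # doubling cluster size -> ~0.36
print(round(crt_mdes(80, 25, icc=0.15), 2))  # doubling clusters -> ~0.27
```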
May, Henry; Sirinides, Philip; Gray, Abby; Davila, Heather Goldsworthy; Sam, Cecile; Blalock, Toscha; Blackman, Horatio; Anderson-Clark, Helen; Schiera, Andrew J. – Society for Research on Educational Effectiveness, 2015
As part of the 2010 economic stimulus, a $55 million "Investing in Innovation" (i3) grant from the US Department of Education was awarded to scale up Reading Recovery across the nation. This paper presents the final round of results from the large-scale, mixed methods randomized evaluation of the implementation and impacts of Reading…
Descriptors: Reading Programs, Program Evaluation, Reading Achievement, Mixed Methods Research
Cheung, Alan; Slavin, Robert – Society for Research on Educational Effectiveness, 2016
As evidence-based reform becomes increasingly important in educational policy, it is becoming essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. The purpose of this study was to examine how methodological features such as types of publication, sample sizes, and…
Descriptors: Effect Size, Evidence Based Practice, Educational Change, Educational Policy
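Moderator analyses of this kind are commonly implemented as an inverse-variance-weighted meta-regression of study effect sizes on methodological features. The sketch below is a generic illustration rather than Cheung and Slavin's procedure; column names such as `published`, `randomized`, and `log_n` are hypothetical.

```python
# Inverse-variance-weighted meta-regression of effect sizes on study
# features; a generic sketch with hypothetical column names.
import pandas as pd
import statsmodels.api as sm

def moderator_regression(studies: pd.DataFrame):
    # Dummy-coded design features plus log sample size as moderators.
    X = sm.add_constant(studies[["published", "randomized", "log_n"]])
    weights = 1.0 / studies["variance"]  # weight precise studies more
    return sm.WLS(studies["effect_size"], X, weights=weights).fit()
```

A positive coefficient on a feature would indicate that studies with that feature report systematically larger effects, holding the other moderators fixed.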
Gorard, Stephen; Siddiqui, Nadia; See, Beng Huat – Education Endowment Foundation, 2014
Response to Intervention (RTI) is a targeted programme that uses a tiered approach to identify the needs of low-achieving pupils. The approach begins with whole-class teaching (Tier 1), followed by small-group tuition (Tier 2) for those who need more attention, and one-to-one tutoring (Tier 3) for those who do not respond to the small group…
Descriptors: Response to Intervention, Program Evaluation, Elementary School Students, Grade 5