Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 7
Since 2016 (last 10 years): 27
Since 2006 (last 20 years): 29
Descriptor
Educational Research: 29
Program Evaluation: 29
Randomized Controlled Trials: 29
Intervention: 17
Program Effectiveness: 12
Research Design: 9
Evaluation Methods: 7
Research Methodology: 7
Educational Policy: 6
Effect Size: 6
Regression (Statistics): 6
Author
Kautz, Tim: 3
Schochet, Peter Z.: 3
Deke, John: 2
Erickson, Anna: 2
Hedges, Larry V.: 2
Hill, Heather C.: 2
Inglis, Matthew: 2
Koutsouris, George: 2
Lortie-Forgues, Hugues: 2
May, Henry: 2
Norwich, Brahm: 2
Publication Type
Journal Articles: 18
Reports - Research: 15
Reports - Descriptive: 6
Reports - Evaluative: 5
Information Analyses: 2
Guides - Non-Classroom: 1
Numerical/Quantitative Data: 1
Speeches/Meeting Papers: 1
Tests/Questionnaires: 1
Education Level
Elementary Secondary Education: 8
Elementary Education: 4
Secondary Education: 2
Early Childhood Education: 1
Audience
Researchers: 2
Policymakers: 1
Practitioners: 1
A. Brooks Bowden – AERA Open, 2023
Although experimental evaluations have been labeled the "gold standard" of evidence for policy (U.S. Department of Education, 2003), evaluations without an analysis of costs are not sufficient for policymaking (Monk, 1995; Ross et al., 2007). Funding organizations now require cost-effectiveness data in most evaluations of effects. Yet,…
Descriptors: Cost Effectiveness, Program Evaluation, Economics, Educational Finance
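The cost-effectiveness analysis this abstract calls for reduces to simple arithmetic: divide the incremental cost of an intervention by its impact estimate. A minimal sketch with hypothetical numbers, not figures from the study:

```python
# Cost-effectiveness sketch (all numbers hypothetical, not from Bowden 2023).
# The ratio divides incremental cost per participant by the impact estimate,
# giving the cost of producing one standard deviation of improvement.

incremental_cost = 600.0   # assumed extra cost per student, in dollars
effect_size = 0.15         # assumed impact in standard deviation units

cost_per_sd = incremental_cost / effect_size
print(f"Cost per 1 SD gain: ${cost_per_sd:,.0f} per student")  # $4,000
```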
Hansford, Nathaniel; Schechter, Rachel L. – International Journal of Modern Education Studies, 2023
Meta-analyses are systematic summaries of research that use quantitative methods to find the mean effect size (standardized mean difference) for interventions. Critics of meta-analysis point out that such analyses can conflate the results of low- and high-quality studies, make improper comparisons and result in statistical noise. All these…
Descriptors: Meta Analysis, Best Practices, Randomized Controlled Trials, Criticism
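The "mean effect size" a meta-analysis reports is typically an inverse-variance-weighted average of study-level standardized mean differences. A minimal fixed-effect sketch with made-up inputs (real syntheses would also weigh study quality, the point the critics raise):

```python
import math

# Fixed-effect meta-analysis: inverse-variance-weighted mean of study
# effect sizes (standardized mean differences). Inputs are hypothetical.
effects = [0.12, 0.30, 0.05]      # per-study standardized mean differences
variances = [0.01, 0.04, 0.02]    # per-study sampling variances

weights = [1 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled d = {pooled:.3f} (SE = {pooled_se:.3f})")
```

Note how the low-variance (large) study dominates the pooled estimate; this is exactly why mixing low- and high-quality studies under one weighting scheme draws criticism.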
Juan David Parra; D. Brent Edwards Jr. – Critical Studies in Education, 2024
This paper seeks to raise awareness among educational researchers and practitioners of some significant weaknesses and internal contradictions of randomised controlled trials (RCTs). Although critiques throughout the years from education scholars have pointed to the detrimental effects of this experimental approach on education practice and values,…
Descriptors: Randomized Controlled Trials, Evidence Based Practice, Educational Practices, Educational Policy
Troyer, Margaret – Journal of Research in Reading, 2022
Background: Randomised controlled trials (RCTs) have long been considered the gold standard in education research. Federal funds are allocated to evaluations that meet What Works Clearinghouse standards; RCT designs are required in order to meet these standards without reservations. Schools seek out interventions that are research based, in other…
Descriptors: Educational Research, Randomized Controlled Trials, Adolescents, Reading Instruction
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
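The sample-size pressure this abstract describes follows directly from the minimum detectable effect size (MDES) formula: for a simple two-arm trial, MDES shrinks with the square root of n, so halving the target from 0.20 to 0.10 SD roughly quadruples the required sample. A sketch under textbook simplifying assumptions (individual randomization, 50/50 split, no clustering or covariates), not the designs the authors analyze:

```python
import math
from scipy.stats import norm

def mdes(n, alpha=0.05, power=0.80):
    """MDES in SD units for a two-arm, individually randomized trial
    with a 50/50 split, no clustering, and no covariate adjustment."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * math.sqrt(4 / n)

for n in (800, 3200):
    print(f"n = {n:>4}: MDES = {mdes(n):.2f} SD")
# n =  800: MDES ~ 0.20 SD
# n = 3200: MDES ~ 0.10 SD
```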
Heather C. Hill; Anna Erickson – Annenberg Institute for School Reform at Brown University, 2021
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Effectiveness, Multivariate Analysis, Randomized Controlled Trials
What Works Clearinghouse, 2022
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Lortie-Forgues, Hugues; Inglis, Matthew – Educational Researcher, 2019
In this response, we first show that Simpson's proposed analysis answers a different and less interesting question than ours. We then justify the choice of prior for our Bayes factors calculations, but we also demonstrate that the substantive conclusions of our article are not substantially affected by varying this choice.
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
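A common way to compute such a Bayes factor for a trial's impact estimate is to compare the marginal likelihood of the estimate under H1 (true effect drawn from a normal prior centered at zero) against H0 (effect exactly zero); varying the prior SD then shows how sensitive the conclusion is to that choice. A minimal sketch with hypothetical numbers, not the authors' exact specification:

```python
from scipy.stats import norm

def bayes_factor_10(estimate, se, prior_sd):
    """BF for H1 (effect ~ Normal(0, prior_sd^2)) vs H0 (effect = 0),
    treating the impact estimate as Normal with known SE."""
    m1 = norm.pdf(estimate, loc=0, scale=(se**2 + prior_sd**2) ** 0.5)
    m0 = norm.pdf(estimate, loc=0, scale=se)
    return m1 / m0

est, se = 0.08, 0.06   # hypothetical impact estimate and its standard error
for prior_sd in (0.10, 0.20, 0.40):
    print(f"prior SD {prior_sd:.2f}: BF10 = {bayes_factor_10(est, se, prior_sd):.2f}")
```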
Lortie-Forgues, Hugues; Inglis, Matthew – Educational Researcher, 2019
There are a growing number of large-scale educational randomized controlled trials (RCTs). Considering their expense, it is important to reflect on the effectiveness of this approach. We assessed the magnitude and precision of effects found in those large-scale RCTs commissioned by the UK-based Education Endowment Foundation and the U.S.-based…
Descriptors: Randomized Controlled Trials, Educational Research, Effect Size, Program Evaluation
Heather C. Hill; Anna Erickson – Educational Researcher, 2019
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Implementation, Program Effectiveness, Intervention
Norwich, Brahm; Koutsouris, George – International Journal of Research & Method in Education, 2020
This paper describes the context, processes and issues experienced over 5 years in which an RCT was carried out to evaluate a programme for children aged 7-8 who were struggling with their reading. Its specific aim is to illuminate questions about the design of complex teaching approaches and their evaluation using an RCT. This covers the early…
Descriptors: Randomized Controlled Trials, Program Evaluation, Reading Programs, Educational Research
Deke, John; Wei, Thomas; Kautz, Tim – Society for Research on Educational Effectiveness, 2018
Evaluators of education interventions increasingly need to design studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." For example, an evaluation of Response to Intervention from the Institute of Education Sciences (IES) detected impacts ranging from 0.13 to 0.17 standard…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
Hallberg, Kelly; Williams, Ryan; Swanlund, Andrew – Journal of Research on Educational Effectiveness, 2020
More aggregate data on school performance is available than ever before, opening up new possibilities for applied researchers interested in assessing the effectiveness of school-level interventions quickly and at a relatively low cost by implementing comparative interrupted times series (CITS) designs. We examine the extent to which effect…
Descriptors: Data Use, Research Methodology, Program Effectiveness, Design
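A comparative interrupted time series estimate can be read off a regression that lets treated schools' level and trend shift at the intervention point relative to comparison schools. A minimal sketch using statsmodels, with hypothetical file and column names (not the authors' data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# CITS sketch. Assumes a hypothetical long-format school-year panel with
# columns: 'score' (outcome), 'treated' (0/1 group flag), 'year_c' (years
# since intervention, centered), 'post' (1 at/after the intervention).
df = pd.read_csv("school_panel.csv")  # hypothetical file

fit = smf.ols("score ~ treated * post * year_c", data=df).fit()

# treated:post is the CITS level-shift impact estimate;
# treated:post:year_c captures any post-intervention trend difference.
print(fit.params[["treated:post", "treated:post:year_c"]])
```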
Simpson, Adrian – Educational Researcher, 2019
A recent paper uses Bayes factors to argue a large minority of rigorous, large-scale education RCTs are "uninformative." The definition of "uninformative" depends on the authors' hypothesis choices for calculating Bayes factors. These arguably overadjust for effect size inflation and involve a fixed prior distribution,…
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
May, Henry; Jones, Akisha; Blakeney, Aly – AERA Online Paper Repository, 2019
An RD design provides statistically robust estimates and gives researchers an alternative causal estimation tool for educational environments where an RCT may not be feasible. Results from the External Evaluation of the i3 Scale-Up of Reading Recovery show that impact estimates were remarkably similar between a randomized control…
Descriptors: Regression (Statistics), Research Design, Randomized Controlled Trials, Research Methodology
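The regression discontinuity comparison this abstract describes boils down to a local linear fit on each side of an assignment cutoff. A minimal sharp-RD sketch with hypothetical variable names and a fixed bandwidth (real applications would use data-driven bandwidths, e.g. via rdrobust):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sharp RD sketch. Assumes hypothetical columns: 'score' (outcome) and
# 'running' (assignment variable, centered at the cutoff), with treatment
# assigned to students below the cutoff -- here simplified to running < 0.
df = pd.read_csv("rd_sample.csv")           # hypothetical file
df["treat"] = (df["running"] < 0).astype(int)

h = 5.0                                     # assumed bandwidth
local = df[df["running"].abs() <= h]

# Local linear regression with separate slopes on each side of the cutoff;
# the 'treat' coefficient is the RD impact estimate at the cutoff.
fit = smf.ols("score ~ treat * running", data=local).fit()
print(fit.params["treat"])
```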