Sims, Sam; Anders, Jake; Inglis, Matthew; Lortie-Forgues, Hugues – Journal of Research on Educational Effectiveness, 2023
Randomized controlled trials have proliferated in education, in part because they provide an unbiased estimator for the causal impact of interventions. It is increasingly recognized that many such trials in education have low power to detect an effect if indeed there is one. However, it is less well known that low-powered trials tend to…
Descriptors: Randomized Controlled Trials, Educational Research, Effect Size, Intervention
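The phenomenon Sims et al. point to — that the low-powered trials which happen to reach significance systematically overstate the true effect — can be illustrated with a short Monte Carlo sketch. This is not the authors' analysis; all parameters (true effect, arm size, trial count) are illustrative assumptions.

```python
import numpy as np

# Monte Carlo sketch: with low power, the trials that reach statistical
# significance systematically overestimate the true effect size.
rng = np.random.default_rng(0)

true_effect = 0.10          # assumed true standardized effect (Cohen's d)
n_per_arm = 50              # small arms -> low power for d = 0.10
n_trials = 20_000

se = np.sqrt(2 / n_per_arm)                        # approx. SE of the d estimate
estimates = rng.normal(true_effect, se, n_trials)  # one estimate per trial
significant = np.abs(estimates / se) > 1.96        # two-sided test, alpha = .05

power = significant.mean()
mean_sig_estimate = estimates[significant].mean()
exaggeration = mean_sig_estimate / true_effect

print(f"power ~ {power:.2f}")
print(f"mean significant estimate ~ {mean_sig_estimate:.2f} "
      f"({exaggeration:.1f}x the true effect)")
```

Conditioning on significance acts as a filter: only estimates far from zero pass it, so the published subset overstates the effect by a multiple.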
Simpson, Adrian – Educational Research and Evaluation, 2018
Ainsworth et al.'s paper "Sources of Bias in Outcome Assessment in Randomised Controlled Trials: A Case Study" examines alternative accounts for a large difference in effect size between two outcomes in the same intervention evaluation. It argues that the probable explanation relates to masking: Only one outcome measure was administered by…
Descriptors: Statistical Bias, Randomized Controlled Trials, Effect Size, Outcome Measures
Bloom, Howard; Bell, Andrew; Reiman, Kayla – Journal of Research on Educational Effectiveness, 2020
This article assesses the likely generalizability of educational treatment-effect estimates from regression discontinuity designs (RDDs) when treatment assignment is based on academic pretest scores. Our assessment uses data on outcome and pretest measures from six educational experiments, ranging from preschool through high school, to estimate…
Descriptors: Data Use, Randomized Controlled Trials, Research Design, Regression (Statistics)
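The Bloom, Bell, and Reiman abstract concerns regression discontinuity designs in which a pretest cutoff determines treatment. A minimal simulated sketch of such a design (the data-generating parameters and bandwidth are made up, not taken from their study) shows a local-linear RDD estimate recovering a known effect at the cutoff:

```python
import numpy as np

# Toy sharp RDD: treatment assigned to units scoring below a pretest
# cutoff; the effect at the cutoff is estimated from local linear fits
# on each side. Illustrative parameters only.
rng = np.random.default_rng(1)

n, tau, cutoff, bandwidth = 5_000, 0.30, 0.0, 0.5
pretest = rng.normal(0.0, 1.0, n)
treated = pretest < cutoff                       # sharp assignment rule
outcome = 0.5 * pretest + tau * treated + rng.normal(0.0, 1.0, n)

def boundary_fit(side_mask):
    """Local linear fit within the bandwidth, evaluated at the cutoff."""
    m = side_mask & (np.abs(pretest - cutoff) < bandwidth)
    slope, intercept = np.polyfit(pretest[m], outcome[m], 1)
    return slope * cutoff + intercept

rdd_estimate = boundary_fit(treated) - boundary_fit(~treated)
print(f"RDD estimate at the cutoff: {rdd_estimate:.2f} (true effect {tau})")
```

The estimate is identified only at the cutoff, which is exactly why the generalizability question the article studies — whether that local estimate extends away from the cutoff — matters.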
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
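Deke, Wei, and Kautz discuss studies powered to detect impacts well below Cohen's 0.20 SD benchmark. A standard normal-approximation sample-size formula (a textbook calculation, not taken from their report) makes the cost concrete: halving the detectable effect quadruples the required sample.

```python
import math
from statistics import NormalDist

def n_per_arm(d, alpha=0.05, power=0.80):
    """Approximate per-arm n for a two-arm comparison of means with
    standardized effect d (normal approximation, equal variances)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

print(n_per_arm(0.20))  # Cohen's "small" effect
print(n_per_arm(0.05))  # a quarter the effect -> ~16x the sample
```

The quadratic growth in required n (proportional to 1/d²) is what makes detecting substantively meaningful but small impacts so expensive, and why underpowered designs are common.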
Tipton, Elizabeth; Pustejovsky, James E. – Society for Research on Educational Effectiveness, 2015
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Descriptors: Randomized Controlled Trials, Sample Size, Effect Size, Hypothesis Testing