Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 8
Since 2006 (last 20 years): 12
Source
National Center for Education…: 3
Society for Research on…: 3
What Works Clearinghouse: 2
Annenberg Institute for…: 1
Educational Administration…: 1
Educational Research and…: 1
International Journal of…: 1
Author
Schochet, Peter Z.: 4
Ben Styles: 1
Ben Weidmann: 1
Camburn, Eric M.: 1
Chiang, Hanley: 1
Deke, John: 1
Goldring, Ellen: 1
Huff, Jason: 1
Hugues Lortie-Forgues: 1
Jake Anders: 1
Kautz, Tim: 1
Publication Type
Reports - Research: 6
Reports - Descriptive: 5
Journal Articles: 3
Numerical/Quantitative Data: 1
Reports - Evaluative: 1
Education Level
Early Childhood Education: 1
Elementary Education: 1
Elementary Secondary Education: 1
Grade 3: 1
Primary Education: 1
Sam Sims; Jake Anders; Matthew Inglis; Hugues Lortie-Forgues; Ben Styles; Ben Weidmann – Annenberg Institute for School Reform at Brown University, 2023
Over the last twenty years, education researchers have increasingly conducted randomised experiments with the goal of informing the decisions of educators and policymakers. Such experiments have generally employed broad, consequential, standardised outcome measures in the hope that this would allow decisionmakers to compare effectiveness of…
Descriptors: Educational Research, Research Methodology, Randomized Controlled Trials, Program Effectiveness
Kvernbekk, Tone – Educational Research and Evaluation, 2019
This paper discusses, compares, and contrasts 4 different models for bringing evidence from randomised controlled trials (RCTs) into practice and into practical reasoning. I look at what questions the models can and cannot answer, what role they accord to RCT evidence, and what their possible attraction for practitioners might be. The models are…
Descriptors: Role, Evidence Based Practice, Evidence, Models
Schweig, Jonathan David; Pane, John F. – International Journal of Research & Method in Education, 2016
Demands for scientific knowledge of what works in educational policy and practice have driven interest in quantitative investigations of educational outcomes, and randomized controlled trials (RCTs) have proliferated under these conditions. In educational settings, even when individuals are randomized, both experimental and control students are…
Descriptors: Randomized Controlled Trials, Educational Research, Multivariate Analysis, Models
VanHoudnos, Nathan – Society for Research on Educational Effectiveness, 2016
Cluster randomized experiments are ubiquitous in modern education research. Although a variety of modeling approaches are used to analyze these data, perhaps the most common methodology is a normal mixed effects model where some effects, such as the treatment effect, are regarded as fixed, and others, such as the effect of group random assignment…
Descriptors: Effect Size, Randomized Controlled Trials, Educational Experiments, Educational Research
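To make the modeling approach in the VanHoudnos (2016) abstract concrete, here is a minimal sketch, not drawn from the paper, of a normal mixed effects model for a cluster randomized experiment: the treatment effect enters as a fixed effect and the random-assignment cluster as a random intercept. The simulated data, effect sizes, and variable names are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clusters, n_per_cluster = 40, 25

# Cluster-level random assignment: every student in a cluster shares its status.
cluster = np.repeat(np.arange(n_clusters), n_per_cluster)
treat = np.repeat(rng.integers(0, 2, n_clusters), n_per_cluster)
cluster_effect = np.repeat(rng.normal(0, 0.5, n_clusters), n_per_cluster)
y = 0.2 * treat + cluster_effect + rng.normal(0, 1, n_clusters * n_per_cluster)

data = pd.DataFrame({"y": y, "treat": treat, "cluster": cluster})

# Treatment as a fixed effect; cluster of random assignment as a random intercept.
result = smf.mixedlm("y ~ treat", data, groups=data["cluster"]).fit()
print(result.summary())
```

Because assignment happens at the cluster level, the random intercept absorbs the intracluster correlation and keeps the standard error of the treatment effect from being understated.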
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2017
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies. The impact estimators are derived using the building blocks of experimental designs with minimal assumptions, and have good statistical properties. The methods apply to randomized controlled trials (RCTs) and…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
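As a rough illustration of the design-based idea this abstract describes, and only a sketch under simplifying assumptions rather than the estimators developed in the report, the impact in a simple non-clustered RCT can be estimated by the treatment-control difference in means with a Neyman-style variance that rests on the randomization itself rather than a fitted outcome model. The simulated data below are hypothetical.

```python
import numpy as np

def design_based_impact(y, t):
    """Difference-in-means impact with a Neyman variance estimator.

    y: outcomes; t: 0/1 indicators of randomized treatment assignment.
    """
    y, t = np.asarray(y, dtype=float), np.asarray(t)
    y1, y0 = y[t == 1], y[t == 0]
    impact = y1.mean() - y0.mean()
    # Variance comes from the randomization design, not a regression model.
    var = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    return impact, np.sqrt(var)

rng = np.random.default_rng(1)
t = rng.integers(0, 2, 200)
y = 0.3 * t + rng.normal(size=200)
impact, se = design_based_impact(y, t)
print(f"impact = {impact:.3f}, SE = {se:.3f}")
```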
Kautz, Tim; Schochet, Peter Z.; Tilley, Charles – National Center for Education Evaluation and Regional Assistance, 2017
A new design-based theory has recently been developed to estimate impacts for randomized controlled trials (RCTs) and basic quasi-experimental designs (QEDs) for a wide range of designs used in social policy research (Imbens & Rubin, 2015; Schochet, 2016). These methods use the potential outcomes framework and known features of study designs…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2017
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies (Imbens and Rubin, 2015; Schochet, 2015, 2016). The estimators are derived using the building blocks of experimental designs with minimal assumptions, and are unbiased and normally distributed in large samples…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
What Works Clearinghouse, 2014
Attrition occurs when members of the initial research sample are not part of the final analysis sample, for example because of missing data or because they left the study. Both the overall sample attrition and the differences in attrition between the groups can affect the statistical equivalence of the sample and create potential for bias. The WWC has given careful…
Descriptors: Attrition (Research Studies), Statistical Bias, Randomized Controlled Trials, Models
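For readers unfamiliar with the two quantities the WWC abstract refers to, here is a small hypothetical calculation of overall and differential attrition for a two-group randomized sample. The sample sizes are invented for illustration and do not reflect WWC guidance or thresholds.

```python
def attrition_rates(randomized_t, analyzed_t, randomized_c, analyzed_c):
    """Overall and differential attrition for a two-group randomized sample."""
    rate_t = 1 - analyzed_t / randomized_t
    rate_c = 1 - analyzed_c / randomized_c
    overall = 1 - (analyzed_t + analyzed_c) / (randomized_t + randomized_c)
    differential = abs(rate_t - rate_c)
    return overall, differential

# Hypothetical study: 500 students randomized per group; 430 treatment and
# 460 control students remain in the final analysis sample.
overall, differential = attrition_rates(500, 430, 500, 460)
print(f"overall attrition = {overall:.1%}, differential attrition = {differential:.1%}")
```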
Deke, John; Chiang, Hanley – Society for Research on Educational Effectiveness, 2014
Meeting the What Works Clearinghouse (WWC) attrition standard (or one of the attrition standards based on the WWC standard) is now an important consideration for researchers conducting studies that could potentially be reviewed by the WWC (or other evidence reviews). Understanding the basis of this standard is valuable for anyone seeking to meet…
Descriptors: Attrition (Research Studies), Student Attrition, Randomized Controlled Trials, Standards
What Works Clearinghouse, 2013
Attrition occurs when members of the initial research sample are not part of the final analysis sample, for example because of missing data or because they left the study. Both the overall sample attrition and the differences in attrition between the groups can affect the statistical equivalence of the sample and create potential for bias. The WWC has given careful…
Descriptors: Attrition (Research Studies), Statistical Bias, Randomized Controlled Trials, Models
Camburn, Eric M.; Goldring, Ellen; Sebastian, James; May, Henry; Huff, Jason – Educational Administration Quarterly, 2016
Purpose: The past decade has seen considerable debate about how best to evaluate the efficacy of educational improvement initiatives, and members of the educational leadership research community have entered the debate with great energy. Throughout this debate, the use of randomized experiments has been a particularly contentious subject. This…
Descriptors: Educational Administration, Educational Practices, Educational Improvement, Administrator Education
Schochet, Peter Z. – Society for Research on Educational Effectiveness, 2013
In randomized controlled trials (RCTs) of educational interventions, there is a growing literature on impact estimation methods to adjust for missing student outcome data using such methods as multiple imputation, the construction of nonresponse weights, casewise deletion, and maximum likelihood methods (see, for example, Allison, 2002; Graham, 2009;…
Descriptors: Control Groups, Experimental Groups, Educational Research, Data Analysis
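As a hedged sketch of one of the adjustments named in this abstract, and illustrative only since the simulated data, covariate, and response model are assumptions rather than the paper's specification, outcome nonresponse can be handled by weighting respondents by the inverse of their estimated probability of having an observed outcome.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
t = rng.integers(0, 2, n)                      # random assignment
x = rng.normal(size=n)                         # baseline covariate (e.g., a pretest)
y = 0.25 * t + 0.8 * x + rng.normal(size=n)    # student outcome
# Outcomes are more likely to be observed for students with higher x.
observed = rng.random(n) < 1 / (1 + np.exp(-(0.5 + 0.8 * x)))

# Estimate each student's probability of responding from baseline data,
# then weight respondents by the inverse of that probability.
response_model = sm.Logit(observed.astype(int), sm.add_constant(x)).fit(disp=0)
p_respond = response_model.predict(sm.add_constant(x))
weights = 1 / p_respond

treat_resp = (t == 1) & observed
control_resp = (t == 0) & observed
impact = (np.average(y[treat_resp], weights=weights[treat_resp])
          - np.average(y[control_resp], weights=weights[control_resp]))
print(f"nonresponse-weighted impact estimate: {impact:.3f}")
```

Casewise deletion would instead drop the missing cases outright, which is unbiased only when the outcomes are missing completely at random.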