Showing all 11 results
Peer reviewed
Direct link
Ishita Ahmed; Masha Bertling; Lijin Zhang; Andrew Ho; Prashant Loyalka; Scott Rozelle; Ben Domingue – Society for Research on Educational Effectiveness, 2023
Background: Evidence from education randomized controlled trials (RCTs) in low- and middle-income countries (LMICs) demonstrates how interventions can improve children's educational achievement [1, 2, 3, 4]. RCTs assess the impact of an intervention by comparing outcomes--aggregate test scores--between treatment and control groups. A review of…
Descriptors: Randomized Controlled Trials, Educational Research, Outcome Measures, Research Design
Ishita Ahmed; Masha Bertling; Lijin Zhang; Andrew D. Ho; Prashant Loyalka; Hao Xue; Scott Rozelle; Benjamin W. Domingue – Annenberg Institute for School Reform at Brown University, 2023
Researchers use test outcomes to evaluate the effectiveness of education interventions across numerous randomized controlled trials (RCTs). Aggregate test data--for example, simple measures like the sum of correct responses--are compared across treatment and control groups to determine whether an intervention has had a positive impact on student…
Descriptors: Randomized Controlled Trials, Educational Research, Outcome Measures, Research Design
Lydia Bradford – ProQuest LLC, 2024
In randomized controlled trials (RCTs), the focus has recently shifted to how an intervention yields positive results on its intended outcome. This aligns with the recent push for implementation science in healthcare (Bauer et al., 2015) but goes beyond it. RCTs have moved to evaluating the theoretical framing of the intervention as well as differing…
Descriptors: Hierarchical Linear Modeling, Mediation Theory, Randomized Controlled Trials, Research Design
K. L. Anglin; A. Krishnamachari; V. Wong – Grantee Submission, 2020
This article reviews important statistical methods for estimating the impact of interventions on outcomes in education settings, particularly programs that are implemented in field, rather than laboratory, settings. We begin by describing the causal inference challenge for evaluating program effects. Then four research designs are discussed that…
Descriptors: Causal Models, Statistical Inference, Intervention, Program Evaluation
Peer reviewed
Direct link
Stallasch, Sophie E.; Lüdtke, Oliver; Artelt, Cordula; Brunner, Martin – Journal of Research on Educational Effectiveness, 2021
To plan cluster-randomized trials with sufficient statistical power to detect intervention effects on student achievement, researchers need multilevel design parameters, including measures of between-classroom and between-school differences and the amounts of variance explained by covariates at the student, classroom, and school level. Previous…
Descriptors: Foreign Countries, Randomized Controlled Trials, Intervention, Educational Research
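The design parameters Stallasch et al. catalog feed directly into power calculations. As a minimal sketch (not drawn from the paper itself), the minimum detectable effect size (MDES) of a two-level cluster-randomized trial can be computed from the intraclass correlation and the variance explained by covariates at each level, here using Bloom's standard formula with a normal approximation; all parameter values below are illustrative assumptions:

```python
from math import sqrt
from statistics import NormalDist

def mdes_crt(J, n, icc, r2_l1=0.0, r2_l2=0.0, p=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect size (in SD units) for a two-level
    cluster-randomized trial: J clusters of n students, intraclass
    correlation `icc`, covariate R^2 at level 1 and level 2, and
    proportion `p` of clusters assigned to treatment."""
    z = NormalDist().inv_cdf
    m = z(1 - alpha / 2) + z(power)  # two-tailed multiplier (normal approx.)
    var = (icc * (1 - r2_l2)) / (p * (1 - p) * J) \
        + ((1 - icc) * (1 - r2_l1)) / (p * (1 - p) * J * n)
    return m * sqrt(var)

# Illustrative: 40 schools of 25 students, ICC = .20,
# a school-level pretest covariate explaining 50% of between-school variance
print(round(mdes_crt(J=40, n=25, icc=0.20, r2_l2=0.50), 3))  # → 0.322
```

The example shows why the covariate R² values these studies report matter: raising `r2_l2` shrinks the dominant between-cluster variance term and hence the MDES.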
Peer reviewed
PDF on ERIC Download full text
Cole, Russell; Deke, John; Seftor, Neil – Society for Research on Educational Effectiveness, 2016
The What Works Clearinghouse (WWC) maintains design standards to identify rigorous, internally valid education research. As education researchers advance new methodologies, the WWC must revise its standards to include an assessment of the new designs. Recently, the WWC has revised standards for two emerging study designs: regression discontinuity…
Descriptors: Educational Research, Research Design, Regression (Statistics), Multivariate Analysis
Peer reviewed
Direct link
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon – Journal of Research on Educational Effectiveness, 2016
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated…
Descriptors: Educational Research, Research Design, Intervention, Statistical Analysis
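The power gain from a pretest covariate that Miciak et al. discuss follows from a standard ANCOVA approximation: adjusting for a pretest correlated r with the outcome shrinks the residual outcome variance by (1 - r²). A hedged sketch (illustrative numbers, not the paper's):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, alpha=0.05, power=0.80, r_pretest=0.0):
    """Per-arm sample size for a two-arm individually randomized trial
    on a standardized outcome (effect size `delta`). Adjusting for a
    pretest with correlation `r_pretest` reduces residual variance by
    (1 - r^2), the usual ANCOVA approximation."""
    z = NormalDist().inv_cdf
    m = z(1 - alpha / 2) + z(power)
    return ceil(2 * (m / delta) ** 2 * (1 - r_pretest ** 2))

print(n_per_arm(0.25))                 # no covariate     → 252
print(n_per_arm(0.25, r_pretest=0.7))  # pretest r = .70  → 129
```

Note the caveat the abstract raises: if the sample is selected on the pretest, range restriction attenuates the observed r, so the realized power gain can be smaller than this formula suggests.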
Peer reviewed
Direct link
Westine, Carl D. – American Journal of Evaluation, 2016
Little is known empirically about intraclass correlations (ICCs) for multisite cluster randomized trial (MSCRT) designs, particularly in science education. In this study, ICCs suitable for science achievement studies using a three-level (students in schools in districts) MSCRT design that block on district are estimated and examined. Estimates of…
Descriptors: Efficiency, Evaluation Methods, Science Achievement, Correlation
Peer reviewed
PDF on ERIC Download full text
Westine, Carl; Spybrook, Jessaca – Society for Research on Educational Effectiveness, 2013
The capacity of the field to conduct power analyses for group randomized trials (GRTs) of educational interventions has improved over the past decade (Authors, 2009). However, a power analysis depends on estimates of design parameters. Hence it is critical to build the empirical base of design parameters for GRTs across a variety of outcomes and…
Descriptors: Randomized Controlled Trials, Research Design, Correlation, Program Effectiveness
Peer reviewed
Direct link
Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen – American Journal of Evaluation, 2016
Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…
Descriptors: Intervention, Multivariate Analysis, Mixed Methods Research, Models
Mills, Jonathan N.; Wolf, Patrick J. – School Choice Demonstration Project, 2016
The Louisiana Scholarship Program (LSP) is a statewide initiative offering publicly funded vouchers to students in low-performing schools whose family income is no greater than 250 percent of the poverty line, enabling them to enroll in local private schools. Initially established in 2008 as a pilot program in New Orleans, the LSP was expanded statewide in 2012.…
Descriptors: Academic Achievement, Educational Vouchers, Randomized Controlled Trials, Scholarships