Showing 1 to 15 of 23 results
Peer reviewed
Huibin Zhang; Zuchao Shen; Walter L. Leite – Journal of Experimental Education, 2025
Cluster-randomized trials have been widely used to evaluate the treatment effects of interventions on student outcomes. When interventions are implemented by teachers, researchers need to account for the nested structure in schools (i.e., students are nested within teachers nested within schools). Schools usually have a very limited number of…
Descriptors: Sample Size, Multivariate Analysis, Randomized Controlled Trials, Correlation
Peer reviewed
Peter Z. Schochet – Journal of Educational and Behavioral Statistics, 2025
Random encouragement designs evaluate treatments that aim to increase participation in a program or activity. These randomized controlled trials (RCTs) can also assess the mediated effects of participation itself on longer term outcomes using a complier average causal effect (CACE) estimation framework. This article considers power analysis…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
Peer reviewed
Peter Schochet – Society for Research on Educational Effectiveness, 2024
Random encouragement designs are randomized controlled trials (RCTs) that test interventions aimed at increasing participation in a program or activity whose take up is not universal. In these RCTs, instead of randomizing individuals or clusters directly into treatment and control groups to participate in a program or activity, the randomization…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
Peer reviewed
Kyle Cox; Ben Kelcey; Hannah Luce – Journal of Experimental Education, 2024
Comprehensive evaluation of treatment effects is aided by considerations for moderated effects. In educational research, the combination of natural hierarchical structures and prevalence of group-administered or shared facilitator treatments often produces three-level partially nested data structures. Literature details planning strategies for a…
Descriptors: Randomized Controlled Trials, Monte Carlo Methods, Hierarchical Linear Modeling, Educational Research
Peer reviewed
Steven Glazerman; Larissa Campuzano; Nancy Murray – Evaluation Review, 2025
Randomized experiments involving education interventions are typically implemented as cluster randomized trials, with schools serving as clusters. To design such a study, it is critical to understand the degree to which learning outcomes vary between versus within clusters (schools), specifically the intraclass correlation coefficient. It is also…
Descriptors: Educational Experiments, Foreign Countries, Educational Assessment, Research Design
Peer reviewed
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
Lydia Bradford – ProQuest LLC, 2024
In randomized controlled trials (RCTs), recent focus has shifted to how an intervention yields positive results on its intended outcome. This aligns with the recent push for implementation science in healthcare (Bauer et al., 2015) but goes beyond it. RCTs have moved to evaluating the theoretical framing of the intervention as well as differing…
Descriptors: Hierarchical Linear Modeling, Mediation Theory, Randomized Controlled Trials, Research Design
Heather C. Hill; Anna Erickson – Annenberg Institute for School Reform at Brown University, 2021
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Effectiveness, Multivariate Analysis, Randomized Controlled Trials
Peer reviewed
Li, Wei; Dong, Nianbo; Maynard, Rebecca A. – Journal of Educational and Behavioral Statistics, 2020
Cost-effectiveness analysis is a widely used educational evaluation tool. The randomized controlled trials that aim to evaluate the cost-effectiveness of the treatment are commonly referred to as randomized cost-effectiveness trials (RCETs). This study provides methods of power analysis for two-level multisite RCETs. Power computations take…
Descriptors: Statistical Analysis, Cost Effectiveness, Randomized Controlled Trials, Educational Research
Peer reviewed
Kelcey, Ben; Spybrook, Jessaca; Dong, Nianbo; Bai, Fangxing – Journal of Research on Educational Effectiveness, 2020
Professional development for teachers is regarded as one of the principal pathways through which we can understand and cultivate effective teaching and improve student outcomes. A critical component of studies that seek to improve teaching through professional development is the detailed assessment of the intermediate teacher development processes…
Descriptors: Faculty Development, Educational Research, Randomized Controlled Trials, Research Design
Peer reviewed (PDF full text available on ERIC)
Deke, John; Wei, Thomas; Kautz, Tim – Society for Research on Educational Effectiveness, 2018
Evaluators of education interventions increasingly need to design studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." For example, an evaluation of Response to Intervention from the Institute of Education Sciences (IES) detected impacts ranging from 0.13 to 0.17 standard…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
Gagnon-Bartsch, J. A.; Sales, A. C.; Wu, E.; Botelho, A. F.; Erickson, J. A.; Miratrix, L. W.; Heffernan, N. T. – Grantee Submission, 2019
Randomized controlled trials (RCTs) admit unconfounded design-based inference--randomization largely justifies the assumptions underlying statistical effect estimates--but often have limited sample sizes. However, researchers may have access to big observational data on covariates and outcomes from RCT non-participants. For example, data from A/B…
Descriptors: Randomized Controlled Trials, Educational Research, Prediction, Algorithms
Zuchao Shen – ProQuest LLC, 2019
Multilevel experiments have been widely used in education and social sciences to evaluate causal effects of interventions. Two key considerations in designing experimental studies are statistical power and the minimal use of resources. Optimal design framework simultaneously addresses both considerations. This dissertation extends previous optimal…
Descriptors: Educational Research, Social Science Research, Research Design, Robustness (Statistics)
Peer reviewed (PDF full text available on ERIC)
What Works Clearinghouse, 2020
The What Works Clearinghouse (WWC) is an initiative of the U.S. Department of Education's Institute of Education Sciences (IES), which was established under the Education Sciences Reform Act of 2002. It is an important part of IES's strategy to use rigorous and relevant research, evaluation, and statistics to improve the nation's education system.…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Wang, Yan; Ostrow, Korinn; Beck, Joseph; Heffernan, Neil – Grantee Submission, 2016
The focus of the learning analytics community bridges the gap between controlled educational research and data mining. Online learning platforms can be used to conduct randomized controlled trials to assist in the development of interventions that increase learning gains; datasets from such research can act as a treasure trove for inquisitive data…
Descriptors: Learning Analytics, Educational Research, Randomized Controlled Trials, Information Retrieval