Showing all 15 results
Peer reviewed
Huibin Zhang; Zuchao Shen; Walter L. Leite – Journal of Experimental Education, 2025
Cluster-randomized trials have been widely used to evaluate the treatment effects of interventions on student outcomes. When interventions are implemented by teachers, researchers need to account for the nested structure in schools (i.e., students are nested within teachers nested within schools). Schools usually have a very limited number of…
Descriptors: Sample Size, Multivariate Analysis, Randomized Controlled Trials, Correlation
Peer reviewed
Joseph Taylor; Dung Pham; Paige Whitney; Jonathan Hood; Lamech Mbise; Qi Zhang; Jessaca Spybrook – Society for Research on Educational Effectiveness, 2023
Background: Power analyses for a cluster-randomized trial (CRT) require estimates of additional design parameters beyond those needed for an individually randomized trial. In a 2-level CRT, there are two sample sizes, the number of clusters and the number of individuals per cluster. The intraclass correlation (ICC), or the proportion of variance…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
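The design parameters this abstract names feed a standard power approximation: the intraclass correlation (ICC) inflates the variance of the treatment-effect estimate through the design effect, 1 + (n − 1) × ICC. The sketch below is a rough normal-approximation illustration, not the authors' method; the function name `crt_power` and all numbers are hypothetical, and clusters are assumed split evenly between arms.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def crt_power(delta, icc, n_clusters, n_per_cluster, z_crit=1.96):
    """Approximate power for a two-level CRT, clusters split 50/50.

    delta: standardized effect size; icc: intraclass correlation.
    The design effect inflates the variance of the mean difference.
    """
    deff = 1 + (n_per_cluster - 1) * icc          # design effect
    n_total = n_clusters * n_per_cluster
    se = math.sqrt(4 * deff / n_total)            # SE of standardized mean difference
    return normal_cdf(delta / se - z_crit)

# Example: 40 schools of 25 students, ICC = 0.15, effect size 0.25
print(crt_power(0.25, 0.15, 40, 25))
```

Dedicated power software applies t-based degrees-of-freedom corrections that this normal approximation omits, so treat the result as a ballpark figure.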
Peer reviewed
James Soland – Journal of Research on Educational Effectiveness, 2024
When randomized controlled trials are not possible, quasi-experimental methods often represent the gold standard. One quasi-experimental method is difference-in-differences (DiD), which compares changes in outcomes before and after treatment across groups to estimate a causal effect. DiD researchers often use fairly exhaustive robustness checks to…
Descriptors: Item Response Theory, Testing, Test Validity, Intervention
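The two-group, two-period DiD comparison the abstract describes reduces to a double subtraction of group means. A minimal sketch with made-up numbers (the group means below are purely illustrative):

```python
# Difference-in-differences with two groups and two periods.
# Outcome means (hypothetical numbers, for illustration only):
means = {
    ("treated", "pre"): 52.0, ("treated", "post"): 60.0,
    ("control", "pre"): 50.0, ("control", "post"): 55.0,
}

def did(means):
    """(post - pre) change for treated, minus the same change for control."""
    change_treated = means[("treated", "post")] - means[("treated", "pre")]
    change_control = means[("control", "post")] - means[("control", "pre")]
    return change_treated - change_control

print(did(means))  # treated change 8.0 minus control change 5.0
```

The control group's change stands in for what would have happened to the treated group absent treatment (the parallel-trends assumption), which is why the robustness checks the abstract mentions matter.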
Peer reviewed
Huang, Francis L.; Zhang, Bixi; Li, Xintong – Journal of Research on Educational Effectiveness, 2023
Binary outcomes are often analyzed in cluster randomized trials (CRTs) using logistic regression, and cluster robust standard errors (CRSEs) are routinely used to account for the dependent nature of nested data in such models. However, CRSEs can be problematic when the number of clusters is low (e.g., < 50) and, with CRTs, a low number of…
Descriptors: Robustness (Statistics), Error of Measurement, Regression (Statistics), Multivariate Analysis
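To see what a cluster-robust (sandwich) variance sums over, here is a minimal sketch for the simplest possible case, an intercept-only model, where the CR0 estimator reduces to squaring each cluster's summed residuals. The function name and data are hypothetical; real analyses with few clusters typically add small-sample corrections (e.g., CR2 adjustments or a wild cluster bootstrap), which this sketch omits.

```python
import math

def cluster_robust_se_of_mean(clusters):
    """CR0 cluster-robust standard error for the overall mean.

    clusters: list of lists, one inner list of outcomes per cluster.
    For an intercept-only model the sandwich estimator reduces to
    sqrt(sum over clusters of (summed residuals)^2) / N.
    """
    all_y = [y for c in clusters for y in c]
    n = len(all_y)
    mean = sum(all_y) / n
    meat = sum(sum(y - mean for y in c) ** 2 for c in clusters)
    return math.sqrt(meat) / n

# Three clusters with within-cluster correlation (illustrative data)
clusters = [[1.0, 1.2, 1.1], [2.0, 2.1, 1.9], [0.4, 0.5, 0.6]]
print(cluster_robust_se_of_mean(clusters))
```

With only three cluster-level terms in the sum, the variance estimate itself is noisy, which is the essence of the few-clusters problem the abstract raises.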
Peer reviewed
Wei Li; Yanli Xie; Dung Pham; Nianbo Dong; Jessaca Spybrook; Benjamin Kelcey – Asia Pacific Education Review, 2024
Cluster randomized trials (CRTs) are commonly used to evaluate the causal effects of educational interventions, where entire clusters (e.g., schools) are randomly assigned to treatment or control conditions. This study introduces statistical methods for designing and analyzing two-level (e.g., students nested within schools) and three-level…
Descriptors: Research Design, Multivariate Analysis, Randomized Controlled Trials, Hierarchical Linear Modeling
Peer reviewed
Reagan Mozer; Luke Miratrix – Society for Research on Educational Effectiveness, 2023
Background: For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. These hand-coded scores are then used as a measured outcome for an impact analysis, with the average scores of the treatment group…
Descriptors: Artificial Intelligence, Coding, Randomized Controlled Trials, Research Methodology
Peer reviewed
Miriam Hattle; Joie Ensor; Katie Scandrett; Marienke van Middelkoop; Danielle A. van der Windt; Melanie A. Holden; Richard D. Riley – Research Synthesis Methods, 2024
Individual participant data (IPD) meta-analysis projects obtain, harmonise, and synthesise original data from multiple studies. Many IPD meta-analyses of randomised trials are initiated to identify treatment effect modifiers at the individual level, thus requiring statistical modelling of interactions between treatment effect and participant-level…
Descriptors: Meta Analysis, Randomized Controlled Trials, Outcomes of Treatment, Evaluation Methods
Peer reviewed
Adam Sales; Sooyong Lee; Tiffany Whittaker; Hyeon-Ah Kang – Society for Research on Educational Effectiveness, 2023
Background: The data revolution in education has led to more data collection, more randomized controlled trials (RCTs), and more data collection within RCTs. Often following IES recommendations, researchers studying program effectiveness gather data on how the intervention was implemented. Educational implementation data can be complex, including…
Descriptors: Program Implementation, Data Collection, Randomized Controlled Trials, Program Effectiveness
Peer reviewed
Kush, Joseph M.; Konold, Timothy R.; Bradshaw, Catherine P. – Journal of Experimental Education, 2022
In two-level designs, the total sample is a function of both the number of Level 2 clusters and the average number of Level 1 units per cluster. Traditional multilevel power calculations rely on either the arithmetic average or the harmonic mean when estimating the average number of Level 1 units across clusters of unbalanced size. The current…
Descriptors: Multivariate Analysis, Randomized Controlled Trials, Monte Carlo Methods, Sample Size
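The contrast between the two averaging rules the abstract compares is easy to see numerically: the harmonic mean down-weights large clusters, so it is smaller than the arithmetic average whenever cluster sizes are unequal. The sizes below are illustrative only.

```python
# Arithmetic vs. harmonic mean of unbalanced cluster sizes.
sizes = [10, 20, 30, 100]

arithmetic = sum(sizes) / len(sizes)
harmonic = len(sizes) / sum(1 / s for s in sizes)

print(arithmetic, round(harmonic, 2))
```

Because power calculations plug the "average" cluster size into the design effect, the choice between these two summaries can shift the estimated power, which is the gap the article investigates.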
Eric C. Hedberg – Grantee Submission, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
E. C. Hedberg – American Journal of Evaluation, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
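The "how many units per cluster" question both Hedberg entries pose can be framed through the effective sample size per cluster, n / (1 + (n − 1) × ICC): with any nonzero ICC, adding units within a cluster has diminishing returns. The numbers below are illustrative, not taken from the article.

```python
# Effective sample size contributed by one cluster of n units.
# With icc > 0 it is capped at 1 / icc, however large n grows.
icc = 0.2

for n in (5, 10, 20, 100):
    n_eff = n / (1 + (n - 1) * icc)
    print(n, round(n_eff, 2))
```

Here even 100 students in a school contribute less than 5 effective observations, which is why adding clusters usually buys more power than adding units per cluster.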
Kush, Joseph M.; Konold, Timothy R.; Bradshaw, Catherine P. – Grantee Submission, 2021
Power in multilevel models remains an area of interest to both methodologists and substantive researchers. In two-level designs, the total sample is a function of both the number of level-2 (e.g., schools) clusters and the average number of level-1 (e.g., classrooms) units per cluster. Traditional multilevel power calculations rely on either the…
Descriptors: Multivariate Analysis, Randomized Controlled Trials, Monte Carlo Methods, Sample Size
Peer reviewed
Steven Glazerman; Larissa Campuzano; Nancy Murray – Evaluation Review, 2025
Randomized experiments involving education interventions are typically implemented as cluster randomized trials, with schools serving as clusters. To design such a study, it is critical to understand the degree to which learning outcomes vary between versus within clusters (schools), specifically the intraclass correlation coefficient. It is also…
Descriptors: Educational Experiments, Foreign Countries, Educational Assessment, Research Design
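The intraclass correlation coefficient the abstract highlights can be estimated from pilot or prior data with the one-way ANOVA estimator for balanced clusters. A minimal sketch with hypothetical data (the function name and outcomes are illustrative, not from the article):

```python
# ANOVA estimator of the intraclass correlation, balanced clusters:
# icc = (msb - msw) / (msb + (n - 1) * msw), n = units per cluster.
def icc_anova(clusters):
    k = len(clusters)                 # number of clusters
    n = len(clusters[0])              # units per cluster (balanced)
    grand = sum(sum(c) for c in clusters) / (k * n)
    cluster_means = [sum(c) / n for c in clusters]
    msb = n * sum((m - grand) ** 2 for m in cluster_means) / (k - 1)
    msw = sum((y - m) ** 2
              for c, m in zip(clusters, cluster_means)
              for y in c) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Illustrative data: outcomes vary more between clusters than within
clusters = [[10.0, 11.0, 9.0], [14.0, 15.0, 13.0], [7.0, 8.0, 6.0]]
print(round(icc_anova(clusters), 3))
```

A large between-cluster variance relative to the within-cluster variance yields an ICC near 1, meaning most outcome variation sits between schools rather than within them.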
Heather C. Hill; Anna Erickson – Annenberg Institute for School Reform at Brown University, 2021
Poor program implementation constitutes one explanation for null results in trials of educational interventions. For this reason, researchers often collect data about implementation fidelity when conducting such trials. In this article, we document whether and how researchers report and measure program fidelity in recent cluster-randomized trials.…
Descriptors: Fidelity, Program Effectiveness, Multivariate Analysis, Randomized Controlled Trials
Peer reviewed
Uwimpuhwe, Germaine; Singh, Akansha; Higgins, Steve; Coux, Mickael; Xiao, ZhiMin; Shkedy, Ziv; Kasim, Adetayo – Journal of Experimental Education, 2022
Educational stakeholders are keen to know the magnitude and importance of different interventions. However, the way evidence is communicated to support understanding of the effectiveness of an intervention is controversial. Typically studies in education have used the standardised mean difference as a measure of the impact of interventions. This…
Descriptors: Program Effectiveness, Intervention, Multivariate Analysis, Bayesian Statistics
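The standardised mean difference this abstract refers to is typically Cohen's d with a pooled standard deviation. A minimal sketch under that assumption; the function name and outcome data are hypothetical, for illustration only.

```python
import math

def cohens_d(treat, control):
    """Standardised mean difference with pooled SD (Cohen's d)."""
    nt, nc = len(treat), len(control)
    mt, mc = sum(treat) / nt, sum(control) / nc
    vt = sum((y - mt) ** 2 for y in treat) / (nt - 1)   # sample variances
    vc = sum((y - mc) ** 2 for y in control) / (nc - 1)
    pooled_sd = math.sqrt(((nt - 1) * vt + (nc - 1) * vc) / (nt + nc - 2))
    return (mt - mc) / pooled_sd

treat = [12.0, 14.0, 13.0, 15.0]
control = [10.0, 11.0, 12.0, 11.0]
print(round(cohens_d(treat, control), 2))
```

Because d is a ratio of estimates, its interpretation for stakeholders is not self-evident, which motivates the alternative ways of communicating effectiveness that the article explores.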