Showing all 15 results
Peer reviewed
Huibin Zhang; Zuchao Shen; Walter L. Leite – Journal of Experimental Education, 2025
Cluster-randomized trials have been widely used to evaluate the treatment effects of interventions on student outcomes. When interventions are implemented by teachers, researchers need to account for the nested structure in schools (i.e., students are nested within teachers nested within schools). Schools usually have a very limited number of…
Descriptors: Sample Size, Multivariate Analysis, Randomized Controlled Trials, Correlation
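The three-level nesting this abstract describes inflates the variance of treatment-effect estimates by a design effect. A minimal sketch of that calculation, assuming equal cluster sizes throughout; the ICC values and cluster sizes below are illustrative, not from the article:

```python
# Design effect for a three-level design (students nested in teachers
# nested in schools) with treatment assigned at the school level.
# rho_teacher and rho_school are the teacher- and school-level ICCs.
def design_effect(n_students, n_teachers, rho_teacher, rho_school):
    """Variance inflation relative to simple random sampling,
    assuming equal cluster sizes."""
    return (1
            + (n_students - 1) * rho_teacher
            + (n_students * n_teachers - 1) * rho_school)

# Example: 20 students per teacher, 3 teachers per school (invented numbers).
deff = design_effect(20, 3, 0.10, 0.15)
effective_n = (20 * 3) / deff  # information content of one 60-student school
```

With only a handful of schools available, the school-level ICC term dominates, which is why adding students within schools quickly stops helping.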
Eric C. Hedberg – Grantee Submission, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
E. C. Hedberg – American Journal of Evaluation, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
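The trade-off this abstract raises, units per cluster versus number of clusters, can be illustrated with a normal-approximation power calculation for a balanced two-level trial. This is a sketch, not the exact noncentral-t computation used by power-analysis software, and all inputs are invented:

```python
from math import sqrt
from statistics import NormalDist

def crt_power(delta, J, n, rho, alpha=0.05):
    """Approximate two-sided power for a balanced cluster-randomized trial:
    delta = standardized effect size, J = total clusters (split evenly
    between arms), n = units per cluster, rho = intraclass correlation."""
    deff = 1 + (n - 1) * rho                 # design effect
    se = sqrt(4 * deff / (J * n))            # SE of the effect estimate
    z = NormalDist().inv_cdf(1 - alpha / 2)
    lam = delta / se                         # noncentrality (normal approx.)
    return 1 - NormalDist().cdf(z - lam) + NormalDist().cdf(-z - lam)

# Tenfold more units per cluster helps less than doubling the clusters:
low = crt_power(0.25, J=40, n=10, rho=0.15)
more_units = crt_power(0.25, J=40, n=100, rho=0.15)
more_clusters = crt_power(0.25, J=80, n=10, rho=0.15)
```

When rho is nonzero, power plateaus in n but keeps growing in J, which is the "how many units per cluster" consideration the abstract refers to.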
Peer reviewed
Steven Glazerman; Larissa Campuzano; Nancy Murray – Evaluation Review, 2025
Randomized experiments involving education interventions are typically implemented as cluster randomized trials, with schools serving as clusters. To design such a study, it is critical to understand the degree to which learning outcomes vary between versus within clusters (schools), specifically the intraclass correlation coefficient. It is also…
Descriptors: Educational Experiments, Foreign Countries, Educational Assessment, Research Design
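The intraclass correlation coefficient this abstract centers on can be estimated from a one-way ANOVA decomposition. A toy sketch with made-up data, assuming equal cluster sizes:

```python
def icc_anova(groups):
    """ANOVA estimator of ICC(1): rho = (MSB - MSW) / (MSB + (n-1)*MSW),
    for a list of equal-sized clusters of outcome values."""
    k = len(groups)                                   # number of clusters
    n = len(groups[0])                                # common cluster size
    grand = sum(x for g in groups for x in g) / (k * n)
    msb = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - sum(g) / n) ** 2 for g in groups for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# All variation between clusters gives an ICC of 1; the estimator can go
# negative when cluster means are more alike than chance would predict.
```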
Peer reviewed
Kristin Porter; Luke Miratrix; Kristen Hunter – Society for Research on Educational Effectiveness, 2021
Background: Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs)…
Descriptors: Statistical Analysis, Hypothesis Testing, Computer Software, Randomized Controlled Trials
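A concrete example of the multiple testing procedures this abstract mentions is the Holm step-down adjustment, sketched below. This is a generic illustration, not the authors' software:

```python
def holm(pvals, alpha=0.05):
    """Holm step-down procedure controlling the familywise error rate.
    Returns a list of booleans, True where the hypothesis is rejected."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):  # threshold loosens as rank grows
            reject[i] = True
        else:
            break                           # step-down: stop at first failure
    return reject
```

Such corrections reduce spurious findings at the cost of statistical power, which is why the paper pairs them with power estimation.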
Peer reviewed
Hedberg, E. C.; Hedges, L. V.; Kuyper, A. M. – Society for Research on Educational Effectiveness, 2015
Randomized experiments are generally considered to provide the strongest basis for inferences about cause and effect. Consequently, randomized field trials have been increasingly used to evaluate the effects of education interventions, products, and services. Populations of interest in education are often hierarchically structured (such as…
Descriptors: Randomized Controlled Trials, Hierarchical Linear Modeling, Correlation, Computation
Peer reviewed
Kelcey, Ben; Spybrook, Jessaca; Phelps, Geoffrey; Jones, Nathan; Zhang, Jiaqi – Journal of Experimental Education, 2017
We develop a theoretical and empirical basis for the design of teacher professional development studies. We build on previous work by (a) developing estimates of intraclass correlation coefficients for teacher outcomes using two- and three-level data structures, (b) developing estimates of the variance explained by covariates, and (c) modifying…
Descriptors: Faculty Development, Research Design, Teacher Effectiveness, Correlation
Peer reviewed
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon – Journal of Research on Educational Effectiveness, 2016
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated…
Descriptors: Educational Research, Research Design, Intervention, Statistical Analysis
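The power gain from a pretest covariate that this abstract describes comes from shrinking the residual standard deviation by sqrt(1 - r²); range restriction attenuates the pretest-posttest correlation and erodes that gain. The correlations below are illustrative only:

```python
from math import sqrt

def mdes_multiplier(r):
    """Factor by which a covariate with pretest-posttest correlation r
    shrinks the minimum detectable effect size: sqrt(1 - r**2)."""
    return sqrt(1 - r ** 2)

unrestricted = mdes_multiplier(0.8)  # full-range sample
restricted = mdes_multiplier(0.6)    # correlation attenuated by selection
```

A sample screened on pretest scores thus recovers less of the covariate's power benefit than an unrestricted sample would.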
Peer reviewed
Westine, Carl D. – American Journal of Evaluation, 2016
Little is known empirically about intraclass correlations (ICCs) for multisite cluster randomized trial (MSCRT) designs, particularly in science education. In this study, ICCs suitable for science achievement studies using a three-level (students in schools in districts) MSCRT design that block on district are estimated and examined. Estimates of…
Descriptors: Efficiency, Evaluation Methods, Science Achievement, Correlation
Peer reviewed
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2016
Experimental evaluations that involve the educational system usually involve a hierarchical structure (students are nested within classrooms that are nested within schools, etc.). Concerns about contamination, where research subjects receive certain features of an intervention intended for subjects in a different experimental group, have often led…
Descriptors: Educational Experiments, Error of Measurement, Research Design, Statistical Analysis
Peer reviewed
Cheung, Alan C. K.; Slavin, Robert E. – Educational Researcher, 2016
As evidence becomes increasingly important in educational policy, it is essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. A total of 645 studies from 12 recent reviews of evaluations of preschool, reading, mathematics, and science programs were studied. Effect…
Descriptors: Effect Size, Research Methodology, Research Design, Preschool Evaluation
Peer reviewed
Westine, Carl D. – Society for Research on Educational Effectiveness, 2015
A cluster-randomized trial (CRT) relies on random assignment of intact clusters to treatment conditions, such as classrooms or schools (Raudenbush & Bryk, 2002). One specific type of CRT, a multi-site CRT (MSCRT), is commonly employed in educational research and evaluation studies (Spybrook & Raudenbush, 2009; Spybrook, 2014; Bloom,…
Descriptors: Correlation, Randomized Controlled Trials, Science Achievement, Cluster Grouping
Peer reviewed
Robertson, Clare; Ramsay, Craig; Gurung, Tara; Mowatt, Graham; Pickard, Robert; Sharma, Pawana – Research Synthesis Methods, 2014
We describe our experience of using a modified version of the Cochrane risk of bias (RoB) tool for randomised and non-randomised comparative studies. Objectives: (1) To assess time to complete RoB assessment; (2) To assess inter-rater agreement; and (3) To explore the association between RoB and treatment effect size. Methods: Cochrane risk of…
Descriptors: Risk, Randomized Controlled Trials, Research Design, Comparative Analysis
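Inter-rater agreement of the kind assessed in objective (2) is commonly summarized with Cohen's kappa; a minimal sketch with invented ratings follows (the article may report a different agreement statistic):

```python
def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical labels."""
    n = len(r1)
    labels = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Two raters judging risk of bias on four studies (made-up data):
kappa = cohens_kappa(["low", "high", "low", "low"],
                     ["low", "high", "high", "low"])
```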
Peer reviewed
Westine, Carl; Spybrook, Jessaca – Society for Research on Educational Effectiveness, 2013
The capacity of the field to conduct power analyses for group randomized trials (GRTs) of educational interventions has improved over the past decade (Authors, 2009). However, a power analysis depends on estimates of design parameters. Hence it is critical to build the empirical base of design parameters for GRTs across a variety of outcomes and…
Descriptors: Randomized Controlled Trials, Research Design, Correlation, Program Effectiveness
Peer reviewed
Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen – American Journal of Evaluation, 2016
Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…
Descriptors: Intervention, Multivariate Analysis, Mixed Methods Research, Models