Showing all 10 results
Peer reviewed
Fangxing Bai; Ben Kelcey; Yanli Xie; Kyle Cox – Journal of Experimental Education, 2025
Prior research has suggested that clustered regression discontinuity designs are a formidable alternative to cluster randomized designs because they provide targeted treatment assignment while maintaining a high-quality basis for inferences on local treatment effects. However, methods for the design and analysis of clustered regression…
Descriptors: Regression (Statistics), Statistical Analysis, Research Design, Educational Research
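As a rough illustration of the kind of estimation this article builds on, the sketch below simulates a simple sharp (single-level, not clustered) regression discontinuity and recovers the local treatment effect with a local linear fit on each side of the cutoff. All names (`simulate_rd`, `rd_estimate`) and parameter values are illustrative assumptions, not the authors' method.

```python
import random

random.seed(0)

def simulate_rd(n=4000, effect=1.5):
    """Simulate sharp regression-discontinuity data: units with running
    variable x >= 0 receive treatment (illustrative toy model)."""
    data = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)       # running variable
        t = 1 if x >= 0.0 else 0            # sharp assignment at cutoff 0
        y = 0.8 * x + effect * t + random.gauss(0.0, 1.0)
        data.append((x, y))
    return data

def rd_estimate(data, bandwidth=0.3):
    """Local linear fit within the bandwidth on each side of the cutoff;
    the estimated effect is the jump in fitted intercepts at x = 0."""
    def intercept_at_zero(points):
        n = len(points)
        mx = sum(x for x, _ in points) / n
        my = sum(y for _, y in points) / n
        sxx = sum((x - mx) ** 2 for x, _ in points)
        slope = sum((x - mx) * (y - my) for x, y in points) / sxx
        return my - slope * mx              # fitted value at x = 0
    left = [(x, y) for x, y in data if -bandwidth <= x < 0.0]
    right = [(x, y) for x, y in data if 0.0 <= x <= bandwidth]
    return intercept_at_zero(right) - intercept_at_zero(left)
```

The clustered designs the article treats add a grouping level (and hence a variance component) that this single-level sketch deliberately omits.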
Peer reviewed
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook – Journal of Experimental Education, 2024
Multisite cluster randomized trials (MCRTs), in which intermediate-level clusters (e.g., classrooms) are randomly assigned to the treatment or control condition within each site (e.g., school), are among the most commonly used experimental designs across a broad range of disciplines. MCRTs often align with the theory that programs are…
Descriptors: Research Design, Randomized Controlled Trials, Statistical Analysis, Sample Size
Peer reviewed
Li, Wei; Konstantopoulos, Spyros – Journal of Experimental Education, 2019
Education experiments frequently assign students to treatment or control conditions within schools. Longitudinal components added in these studies (e.g., students followed over time) allow researchers to assess treatment effects in average rates of change (e.g., linear or quadratic). We provide methods for a priori power analysis in three-level…
Descriptors: Research Design, Statistical Analysis, Sample Size, Effect Size
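For flavor, a minimal normal-approximation power sketch for the much simpler two-level, cross-sectional cluster randomized case (not the three-level polynomial-change models this article actually develops). The function name and default values are my own illustrative assumptions.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def crt_power(delta, clusters_per_arm, cluster_size, icc, z_crit=1.96):
    """Approximate two-sided power to detect a standardized effect
    `delta` in a balanced two-level cluster randomized trial."""
    n_per_arm = clusters_per_arm * cluster_size
    deff = 1 + (cluster_size - 1) * icc      # design effect for clustering
    se = sqrt(2 * deff / n_per_arm)          # SE of the mean difference
    return normal_cdf(delta / se - z_crit)
```

For example, with 10 clusters of 20 per arm, an ICC of 0.10, and a standardized effect of 0.30, this approximation gives power around 0.42; tripling the number of clusters raises it substantially.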
Peer reviewed
Dong, Nianbo; Kelcey, Benjamin; Spybrook, Jessaca – Journal of Experimental Education, 2018
Researchers are often interested in whether the effects of an intervention differ conditional on individual- or group-moderator variables such as children's characteristics (e.g., gender), teacher's background (e.g., years of teaching), and school's characteristics (e.g., urbanity); that is, the researchers seek to examine for whom and under what…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Intervention, Effect Size
Peer reviewed
Rhoads, Christopher H.; Dye, Charles – Journal of Experimental Education, 2016
An important concern when planning research studies is to obtain maximum precision of an estimate of a treatment effect given a budget constraint. When research designs have a "multilevel" or "hierarchical" structure changes in sample size at different levels of the design will impact precision differently. Furthermore, there…
Descriptors: Research Design, Hierarchical Linear Modeling, Regression (Statistics), Sample Size
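A classic result from optimal design theory (which this article's budget-constrained precision problem builds on) gives the cost-optimal cluster size from the cluster-to-person cost ratio and the intraclass correlation. The helper names below are illustrative, not from the article.

```python
from math import sqrt

def optimal_cluster_size(cost_per_cluster, cost_per_person, icc):
    """Cost-optimal number of individuals per cluster in a two-level
    design: sqrt((C_cluster / C_person) * (1 - icc) / icc)."""
    return sqrt((cost_per_cluster / cost_per_person) * (1 - icc) / icc)

def clusters_within_budget(budget, cost_per_cluster, cost_per_person, n):
    """How many clusters of size n a fixed budget can recruit, where each
    cluster costs its fixed cost plus n per-person costs."""
    return int(budget // (cost_per_cluster + n * cost_per_person))
```

Note the trade-off the formula encodes: the higher the intraclass correlation, the smaller the optimal cluster size, since extra individuals within a cluster add less independent information.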
Peer reviewed
Hembry, Ian; Bunuan, Rommel; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim – Journal of Experimental Education, 2015
A multilevel logistic model for estimating a nonlinear trajectory in a multiple-baseline design is introduced. The model is applied to data from a real multiple-baseline design study to demonstrate interpretation of relevant parameters. A simple change-in-levels ("Levels") model and a model involving a quadratic function…
Descriptors: Computation, Research Design, Data, Intervention
Peer reviewed
Stapleton, Laura M.; Pituch, Keenan A.; Dion, Eric – Journal of Experimental Education, 2015
This article presents 3 standardized effect size measures to use when sharing results of an analysis of mediation of treatment effects for cluster-randomized trials. The authors discuss 3 examples of mediation analysis (upper-level mediation, cross-level mediation, and cross-level mediation with a contextual effect) with demonstration of the…
Descriptors: Effect Size, Measurement Techniques, Statistical Analysis, Research Design
Peer reviewed
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim – Journal of Experimental Education, 2014
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Descriptors: Effect Size, Statistical Bias, Sample Size, Regression (Statistics)
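One classical small-sample correction for a standardized mean difference is Hedges' approximate factor 1 - 3/(4·df - 1), shown below as an illustration of the bias problem the article studies; the article itself compares four approaches, which need not coincide with this one.

```python
def hedges_correction(df):
    """Approximate small-sample bias-correction factor for a
    standardized mean difference (Hedges' g)."""
    return 1.0 - 3.0 / (4.0 * df - 1.0)

def corrected_effect(d, df):
    """Shrink a raw standardized effect d by the correction factor."""
    return hedges_correction(df) * d
```

With few measurement occasions (small df) the factor is noticeably below 1, shrinking the inflated raw estimate; as df grows it approaches 1 and the correction vanishes.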
Peer reviewed
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim – Journal of Experimental Education, 2014
One approach for combining single-case data involves use of multilevel modeling. In this article, the authors use a Monte Carlo simulation study to inform applied researchers under which realistic conditions the three-level model is appropriate. The authors vary the value of the immediate treatment effect and the treatment's effect on the time…
Descriptors: Hierarchical Linear Modeling, Monte Carlo Methods, Case Studies, Research Design
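The logic of such a simulation study can be sketched in miniature: generate multiple-baseline data with a known immediate treatment effect, estimate it per case, pool across cases, and repeat. This toy version uses a mean-difference estimator rather than the full three-level model the authors evaluate; all names and settings are illustrative assumptions.

```python
import random

random.seed(42)

def simulate_case(n_base=10, n_treat=10, effect=2.0, sd=1.0):
    """One case: baseline phase centered at 0, treatment phase shifted
    by `effect` (no time trend, for simplicity)."""
    y_base = [random.gauss(0.0, sd) for _ in range(n_base)]
    y_treat = [random.gauss(effect, sd) for _ in range(n_treat)]
    return y_base, y_treat

def estimate_effect(y_base, y_treat):
    """Simple change-in-levels estimate: phase mean difference."""
    return sum(y_treat) / len(y_treat) - sum(y_base) / len(y_base)

def monte_carlo(reps=500, cases=4, effect=2.0):
    """Average the pooled estimate over many replications to check
    whether the estimator recovers the true effect."""
    estimates = []
    for _ in range(reps):
        per_case = [estimate_effect(*simulate_case(effect=effect))
                    for _ in range(cases)]
        estimates.append(sum(per_case) / cases)  # pool across cases
    return sum(estimates) / reps
```

A real simulation study, as in this article, would additionally vary the treatment's effect on the time trend, the number of cases and occasions, and the variance components, and would fit the three-level model itself.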
Peer reviewed
Konstantopoulos, Spyros – Journal of Experimental Education, 2010
Previous work on statistical power has discussed mainly single-level designs or 2-level balanced designs with random effects. Although balanced experiments are common, in practice balance cannot always be achieved. Work on class size is one example of unbalanced designs. This study provides methods for power analysis in 2-level unbalanced designs…
Descriptors: Class Size, Computers, Statistical Analysis, Experiments
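A common rough way to see why imbalance matters is a design-effect approximation that penalizes variation in cluster sizes through their coefficient of variation; it is shown here only as intuition, assuming the Kish/Eldridge-style formula, not the exact power methods this article derives.

```python
import statistics

def design_effect_unbalanced(cluster_sizes, icc):
    """Approximate design effect allowing unequal cluster sizes:
    1 + ((cv^2 + 1) * mean_size - 1) * icc, where cv is the
    coefficient of variation of cluster size."""
    nbar = statistics.mean(cluster_sizes)
    cv = statistics.pstdev(cluster_sizes) / nbar
    return 1 + ((cv ** 2 + 1) * nbar - 1) * icc

def effective_sample_size(cluster_sizes, icc):
    """Total N discounted by the design effect."""
    return sum(cluster_sizes) / design_effect_unbalanced(cluster_sizes, icc)
```

With equal cluster sizes the formula reduces to the familiar 1 + (n - 1)·icc; holding the mean size fixed, more unequal clusters inflate the design effect and shrink the effective sample size, which is the precision cost of imbalance.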