Li, Wei; Dong, Nianbo; Maynard, Rebecca; Spybrook, Jessaca; Kelcey, Ben – Journal of Research on Educational Effectiveness, 2023
Cluster randomized trials (CRTs) are commonly used to evaluate educational interventions, particularly their effectiveness. Recently there has been greater emphasis on using these trials to explore cost-effectiveness. However, methods for establishing the power of cluster randomized cost-effectiveness trials (CRCETs) are limited. This study…
Descriptors: Research Design, Statistical Analysis, Randomized Controlled Trials, Cost Effectiveness
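The kind of power computation these trials require can be illustrated with a minimal sketch. This is not the paper's CRCET-specific method; it is a generic approximation for a balanced two-level cluster randomized trial, using a normal approximation in place of the exact t distribution. The function name `crt_power` and its parameterization are hypothetical.

```python
import math
from statistics import NormalDist

def crt_power(delta, J, n, icc, alpha=0.05, p=0.5):
    """Approximate power for a balanced two-level cluster randomized trial.

    delta : standardized effect size (treatment effect in SD units)
    J     : total number of clusters
    n     : individuals per cluster
    icc   : intraclass correlation (rho)
    p     : proportion of clusters assigned to treatment

    Uses a normal approximation to the test statistic; exact power
    analyses use noncentral t distributions with J - 2 degrees of freedom.
    """
    # Standard error of the standardized treatment-effect estimate
    se = math.sqrt((icc + (1 - icc) / n) / (p * (1 - p) * J))
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)  # two-sided critical value
    lam = delta / se                    # noncentrality on the z scale
    # Power = P(reject) = upper-tail plus (negligible) lower-tail probability
    return (1 - nd.cdf(z_crit - lam)) + nd.cdf(-z_crit - lam)
```

With 40 clusters of 20, an ICC of 0.10, and an effect of 0.25 SD, the sketch returns power of roughly 0.55, which is why such designs are typically powered at the planning stage rather than assumed adequate.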
Li, Wei; Dong, Nianbo; Maynard, Rebecca A. – Journal of Educational and Behavioral Statistics, 2020
Cost-effectiveness analysis is a widely used educational evaluation tool. Randomized controlled trials that aim to evaluate the cost-effectiveness of a treatment are commonly referred to as randomized cost-effectiveness trials (RCETs). This study provides methods of power analysis for two-level multisite RCETs. Power computations take…
Descriptors: Statistical Analysis, Cost Effectiveness, Randomized Controlled Trials, Educational Research
Kelcey, Ben; Spybrook, Jessaca; Dong, Nianbo; Bai, Fangxing – Journal of Research on Educational Effectiveness, 2020
Professional development for teachers is regarded as one of the principal pathways through which we can understand and cultivate effective teaching and improve student outcomes. A critical component of studies that seek to improve teaching through professional development is the detailed assessment of the intermediate teacher development processes…
Descriptors: Faculty Development, Educational Research, Randomized Controlled Trials, Research Design
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2017
The purpose of this paper is to present results of recent advances in power analysis for detecting moderator effects in cluster randomized trials (CRTs). The paper focuses on demonstrating the software PowerUp!-Moderator and provides a resource for researchers seeking to design CRTs with adequate power to detect the moderator effects of…
Descriptors: Computer Software, Research Design, Randomized Controlled Trials, Statistical Analysis
Power for Detecting Treatment by Moderator Effects in Two- and Three-Level Cluster Randomized Trials
Spybrook, Jessaca; Kelcey, Benjamin; Dong, Nianbo – Journal of Educational and Behavioral Statistics, 2016
Recently, there has been an increase in the number of cluster randomized trials (CRTs) to evaluate the impact of educational programs and interventions. These studies are often powered for the main effect of treatment to address the "what works" question. However, program effects may vary by individual characteristics or by context,…
Descriptors: Randomized Controlled Trials, Statistical Analysis, Computation, Educational Research
Dong, Nianbo; Kelcey, Benjamin; Spybrook, Jessaca – Journal of Experimental Education, 2018
Researchers are often interested in whether the effects of an intervention differ conditional on individual- or group-moderator variables such as children's characteristics (e.g., gender), teacher's background (e.g., years of teaching), and school's characteristics (e.g., urbanity); that is, the researchers seek to examine for whom and under what…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Intervention, Effect Size
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2016
The purpose of this study is to propose a general framework for power analyses to detect the moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval to…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Effect Size, Computation
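The minimum detectable effect size (MDES) referenced in this abstract can be sketched in a few lines. This is a generic illustration for a balanced two-level CRT under a normal approximation (exact formulations, including the confidence-interval extensions the study develops, use t-distribution multipliers); the function name `crt_mdes` is hypothetical.

```python
import math
from statistics import NormalDist

def crt_mdes(J, n, icc, alpha=0.05, power=0.80, p=0.5):
    """Minimum detectable effect size for a balanced two-level CRT.

    MDES = (z_{1-alpha/2} + z_{power}) * SE, where SE is the standard
    error of the standardized treatment-effect estimate. A normal
    approximation stands in for the exact t-based multiplier.
    """
    nd = NormalDist()
    multiplier = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    se = math.sqrt((icc + (1 - icc) / n) / (p * (1 - p) * J))
    return multiplier * se
```

Doubling the number of clusters shrinks the MDES by a factor of roughly 1/sqrt(2), which is why adding clusters is generally more effective than adding individuals per cluster.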
Spybrook, Jessaca; Kelcey, Ben; Dong, Nianbo – Society for Research on Educational Effectiveness, 2016
Cluster randomized trials (CRTs), or studies in which intact groups of individuals are randomly assigned to a condition, are becoming more common in evaluation studies of educational programs. A specific type of CRT in which clusters are randomly assigned to treatment within blocks or sites, known as multisite cluster randomized trials (MSCRTs),…
Descriptors: Statistical Analysis, Computation, Randomized Controlled Trials, Cluster Grouping
Kelcey, Benjamin; Dong, Nianbo; Spybrook, Jessaca; Cox, Kyle – Journal of Educational and Behavioral Statistics, 2017
Designs that facilitate inferences concerning both the total and indirect effects of a treatment potentially offer a more holistic description of interventions because they can complement "what works" questions with the comprehensive study of the causal connections implied by substantive theories. Mapping the sensitivity of designs to…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Mediation Theory, Models
Dong, Nianbo; Reinke, Wendy M.; Herman, Keith C.; Bradshaw, Catherine P.; Murray, Desiree W. – Society for Research on Educational Effectiveness, 2015
Cluster randomized experiments are now widely used to examine intervention effects in prevention science. It is meaningful to use empirical benchmarks for interpreting effect size in prevention science. The effect size (i.e., the standardized mean difference, calculated as the difference of the means between the treatment and control groups,…
Descriptors: Effect Size, Correlation, Multivariate Analysis, Statistical Analysis
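The standardized mean difference described in this abstract is straightforward to compute; a minimal sketch (function name and sample-variance pooling as commonly defined for Cohen's d, not necessarily the exact variant the paper benchmarks):

```python
import math

def standardized_mean_difference(treat, control):
    """Cohen's d: difference in group means divided by the pooled
    within-group standard deviation (sample variances, n - 1 weights)."""
    m_t = sum(treat) / len(treat)
    m_c = sum(control) / len(control)
    var_t = sum((x - m_t) ** 2 for x in treat) / (len(treat) - 1)
    var_c = sum((x - m_c) ** 2 for x in control) / (len(control) - 1)
    pooled_sd = math.sqrt(
        ((len(treat) - 1) * var_t + (len(control) - 1) * var_c)
        / (len(treat) + len(control) - 2)
    )
    return (m_t - m_c) / pooled_sd

# Example: groups one pooled SD unit... actually two units apart
# standardized_mean_difference([3, 4, 5], [1, 2, 3]) -> 2.0
```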
Dong, Nianbo – Society for Research on Educational Effectiveness, 2014
For intervention studies involving binary treatment variables, procedures for power analysis have been worked out and computerized estimation tools are generally available. The purpose of this study is to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval,…
Descriptors: Cluster Grouping, Randomized Controlled Trials, Statistical Analysis, Computation