Publication Date
In 2025 | 1 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 4 |
Since 2006 (last 20 years) | 10 |
Descriptor
Research Design | 12 |
Sample Size | 12 |
Statistical Analysis | 8 |
Correlation | 6 |
Educational Research | 5 |
Experiments | 4 |
Computation | 3 |
Error of Measurement | 3 |
Intervention | 3 |
Scores | 3 |
Comparative Analysis | 2 |
Source
Journal of Educational and Behavioral Statistics | 12 |
Author
Hedges, Larry V. | 3 |
Schochet, Peter Z. | 4 |
Boik, Robert J. | 1 |
Borenstein, Michael | 1 |
Cope, Ronald T. | 1 |
Kelcey, Benjamin | 1 |
Moerbeek, Mirjam | 1 |
Rhoads, Christopher | 1 |
Safarkhani, Maryam | 1 |
Shen, Zuchao | 1 |
Zeng, Lingjia | 1 |
Publication Type
Journal Articles | 12 |
Reports - Research | 7 |
Reports - Evaluative | 4 |
Reports - Descriptive | 1 |
Education Level
Elementary Education | 2 |
Early Childhood Education | 1 |
Preschool Education | 1 |
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2025
Random encouragement designs evaluate treatments that aim to increase participation in a program or activity. These randomized controlled trials (RCTs) can also assess the mediated effects of participation itself on longer-term outcomes using a complier average causal effect (CACE) estimation framework. This article considers power analysis…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
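A rough, hypothetical illustration of the compliance-dilution logic that drives power in encouragement designs (this is the textbook Bloom/Wald adjustment, not the article's CACE power framework; the sample size and uptake rate below are made up): the minimum detectable effect on the complier scale is the intent-to-treat (ITT) minimum detectable effect divided by the encouragement-induced uptake rate.

```python
# Illustrative sketch (not the article's method): under the standard Bloom/Wald
# logic for encouragement designs, the CACE is the ITT effect divided by the
# encouragement-induced uptake rate, so the minimum detectable CACE scales with
# 1 / uptake rate.
from scipy.stats import norm

def mde_itt(n_per_arm, sigma=1.0, alpha=0.05, power=0.80):
    """Minimum detectable ITT effect for a two-arm individually randomized trial."""
    m = norm.ppf(1 - alpha / 2) + norm.ppf(power)          # ~2.80 for 80% power
    return m * sigma * (2.0 / n_per_arm) ** 0.5

n = 1000          # hypothetical sample size per arm
uptake = 0.40     # hypothetical difference in participation rates (complier share)

print(f"MDE on the ITT scale : {mde_itt(n):.3f} SD")
print(f"MDE on the CACE scale: {mde_itt(n) / uptake:.3f} SD  (inflated by 1/{uptake})")
```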
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
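For orientation only, a minimal power sketch for the simplest two-group, two-period difference-in-differences design with independent, homoskedastic errors; the article's closed-form variances for panel DID/CITS estimators with staggered treatment timing are far more general and are not reproduced here. All quantities below are hypothetical.

```python
# Minimal sketch of a power calculation for the simplest 2x2 difference-in-
# differences design with independent, homoskedastic errors -- far simpler than
# the panel DID/CITS estimators with variation in treatment timing.
from scipy.stats import norm

def did_mde(n_per_cell, sigma=1.0, alpha=0.05, power=0.80):
    # Var(DID) = sigma^2 * (1/n + 1/n + 1/n + 1/n) for four equal-sized cells
    se = sigma * (4.0 / n_per_cell) ** 0.5
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * se

for n in (50, 200, 800):                      # hypothetical cell sizes
    print(f"n per cell = {n:4d}  ->  MDE = {did_mde(n):.3f} SD")
```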
Shen, Zuchao; Kelcey, Benjamin – Journal of Educational and Behavioral Statistics, 2020
Conventional optimal design frameworks consider a narrow range of sampling cost structures that thereby constrict their capacity to identify the most powerful and efficient designs. We relax several constraints of previous optimal design frameworks by allowing for variable sampling costs in cluster-randomized trials. The proposed framework…
Descriptors: Sampling, Research Design, Randomized Controlled Trials, Statistical Analysis
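As a sketch of what variable sampling costs can mean in practice (not the article's derivations), the snippet below brute-force searches for the cluster-randomized design that minimizes the standard error of the treatment effect under a fixed budget when cluster- and person-level costs differ between conditions; the ICC, budget, and cost figures are invented.

```python
# Sketch of the idea only: search numerically for the cluster-randomized design
# that minimizes the variance of the treatment effect under a budget when
# cluster- and person-level costs differ by condition (all numbers hypothetical;
# the article's formal optimal-design results are not reproduced here).
import itertools

rho = 0.15                      # assumed intraclass correlation
budget = 100_000.0
cost = {"T": {"cluster": 600.0, "person": 30.0},   # treatment clusters cost more
        "C": {"cluster": 300.0, "person": 10.0}}

def variance(J_T, n_T, J_C, n_C):
    # Var of a difference in condition means under a two-level model,
    # with total outcome variance normalized to 1.
    return (rho + (1 - rho) / n_T) / J_T + (rho + (1 - rho) / n_C) / J_C

best = None
for n_T, n_C in itertools.product(range(2, 51), repeat=2):
    for J_T in range(2, 150):
        spent_T = J_T * (cost["T"]["cluster"] + cost["T"]["person"] * n_T)
        J_C = int((budget - spent_T) // (cost["C"]["cluster"] + cost["C"]["person"] * n_C))
        if J_C < 2:
            continue
        v = variance(J_T, n_T, J_C, n_C)
        if best is None or v < best[0]:
            best = (v, J_T, n_T, J_C, n_C)

v, J_T, n_T, J_C, n_C = best
print(f"Best found: {J_T} treatment clusters of {n_T}, "
      f"{J_C} control clusters of {n_C}, SE = {v ** 0.5:.4f}")
```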
Rhoads, Christopher – Journal of Educational and Behavioral Statistics, 2017
Researchers designing multisite and cluster randomized trials of educational interventions will usually conduct a power analysis in the planning stage of the study. To conduct the power analysis, researchers often use estimates of intracluster correlation coefficients and effect sizes derived from an analysis of survey data. When there is…
Descriptors: Statistical Analysis, Hierarchical Linear Modeling, Surveys, Effect Size
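A small sketch of the sensitivity at issue: the minimum detectable effect size (MDES) of a cluster-randomized trial under the standard two-level approximation, computed across a range of assumed intraclass correlations (ICCs). The design and ICC values below are hypothetical.

```python
# Sketch of the planning-stage sensitivity the article is concerned with: the
# MDES of a cluster-randomized trial as a function of the ICC assumed in the
# power analysis (design and numbers hypothetical).
from scipy.stats import norm

def mdes(J, m, rho, alpha=0.05, power=0.80):
    """MDES for a two-arm cluster-randomized trial with J clusters of m students
    per arm, outcome variance normalized to 1 (large-sample approximation)."""
    M = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = 2.0 * (rho + (1 - rho) / m) / J
    return M * var ** 0.5

J, m = 20, 25                 # hypothetical: 20 schools per arm, 25 students each
for rho in (0.05, 0.10, 0.15, 0.20, 0.25):
    print(f"assumed ICC = {rho:.2f}  ->  MDES = {mdes(J, m, rho):.3f} SD")
```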
Hedges, Larry V.; Borenstein, Michael – Journal of Educational and Behavioral Statistics, 2014
The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
Descriptors: Experiments, Research Design, Sample Size, Correlation
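The classic cost-constrained allocation result that work in this line builds on can be sketched in a few lines (this is the standard two-level formula, not the article's extension; the cost figures and ICC are hypothetical): the variance-minimizing cluster size depends only on the cluster-to-student cost ratio and the ICC.

```python
# The classic cost-constrained allocation result (a sketch, not the article's
# extension): for a two-level cluster-randomized trial, the variance-minimizing
# number of students per cluster depends only on the cluster-to-student cost
# ratio and the intraclass correlation.
def optimal_cluster_size(cluster_cost, person_cost, rho):
    return ((cluster_cost / person_cost) * (1 - rho) / rho) ** 0.5

def clusters_affordable(budget, cluster_cost, person_cost, n):
    return budget // (cluster_cost + person_cost * n)

rho, F, c, budget = 0.10, 500.0, 20.0, 60_000.0   # hypothetical inputs
n_star = optimal_cluster_size(F, c, rho)
J = clusters_affordable(budget, F, c, round(n_star))
print(f"optimal students per cluster ~ {n_star:.1f}; "
      f"affordable clusters at that size: {J:.0f}")
```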
Safarkhani, Maryam; Moerbeek, Mirjam – Journal of Educational and Behavioral Statistics, 2013
In a randomized controlled trial, a decision needs to be made about the total number of subjects for adequate statistical power. One way to increase the power of a trial is by including a predictive covariate in the model. In this article, the effects of various covariate adjustment strategies on power are studied for discrete-time…
Descriptors: Statistical Analysis, Scientific Methodology, Research Design, Sample Size
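For intuition, a sketch of the familiar continuous-outcome analogue (the article itself studies discrete-time survival endpoints, which this does not cover): adjusting for a baseline covariate with correlation r to the outcome reduces residual variance by a factor of (1 - r^2), and the required sample size shrinks roughly in proportion. The target effect below is hypothetical.

```python
# Sketch of the continuous-outcome analogue of the article's question: adjusting
# for a baseline covariate with correlation r to the outcome cuts residual
# variance by (1 - r^2), and required sample size roughly in proportion.
from scipy.stats import norm

def n_per_arm(effect_sd, r=0.0, alpha=0.05, power=0.80):
    M = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2.0 * (1 - r ** 2) * (M / effect_sd) ** 2

effect = 0.25                     # hypothetical target effect, in SD units
for r in (0.0, 0.3, 0.5, 0.7):
    print(f"covariate correlation r = {r:.1f}  ->  n per arm ~ {n_per_arm(effect, r):.0f}")
```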
Hedges, Larry V. – Journal of Educational and Behavioral Statistics, 2011
Research designs involving cluster randomization are becoming increasingly important in educational and behavioral research. Many of these designs involve two levels of clustering or nesting (students within classes and classes within schools). Researchers would like to compute effect size indexes based on the standardized mean difference to…
Descriptors: Effect Size, Research Design, Experiments, Computation
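A purely arithmetic sketch of the standardization choices at stake (not the article's estimators or their small-sample corrections), using made-up variance components for schools, classes, and students: the same raw mean difference yields different effect sizes depending on whether it is divided by the total or the within-school standard deviation.

```python
# Simple arithmetic sketch: with students nested in classes nested in schools,
# a mean difference can be standardized by the total standard deviation, i.e.
# the square root of the sum of the variance components across levels.
raw_difference = 4.0                      # hypothetical treatment-control difference
var_school, var_class, var_student = 10.0, 15.0, 75.0   # hypothetical components

sd_total = (var_school + var_class + var_student) ** 0.5
sd_within_school = (var_class + var_student) ** 0.5
total = var_school + var_class + var_student

print(f"d standardized by total SD        : {raw_difference / sd_total:.3f}")
print(f"d standardized by within-school SD: {raw_difference / sd_within_school:.3f}")
print(f"school-level ICC: {var_school / total:.3f}, class-level ICC: {var_class / total:.3f}")
```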
Hedges, Larry V. – Journal of Educational and Behavioral Statistics, 2009
A common mistake in analysis of cluster randomized experiments is to ignore the effect of clustering and analyze the data as if each treatment group were a simple random sample. This typically leads to an overstatement of the precision of results and anticonservative conclusions about precision and statistical significance of treatment effects.…
Descriptors: Data Analysis, Statistical Significance, Statistics, Experiments
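A quick sketch of the mistake described above, using hypothetical numbers: treating clustered data as a simple random sample understates the standard error by roughly the square root of the design effect 1 + (m - 1)ρ, so the reported t statistic is too large.

```python
# Sketch of the mistake described above: ignoring clustering understates the
# standard error by the square root of the design effect 1 + (m - 1) * rho.
# (Rough approximation; it also ignores the accompanying loss of degrees of
# freedom, so the real penalty is somewhat larger.)
m, rho = 25, 0.15                 # hypothetical cluster size and ICC
naive_se, naive_t = 0.08, 2.4     # hypothetical "simple random sample" analysis

deff = 1 + (m - 1) * rho
print(f"design effect = {deff:.2f}")
print(f"naive    SE = {naive_se:.3f}, t = {naive_t:.2f}")
print(f"adjusted SE = {naive_se * deff ** 0.5:.3f}, t = {naive_t / deff ** 0.5:.2f}")
```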
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2009
This article examines theoretical and empirical issues related to the statistical power of impact estimates under clustered regression discontinuity (RD) designs. The theory is grounded in the causal inference and hierarchical linear modeling literature, and the empirical work focuses on common designs used in education research to test…
Descriptors: Statistical Analysis, Regression (Statistics), Educational Research, Evaluation
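A simplified, non-clustered illustration of one driver of these power results (the article's clustered derivations are not reproduced here): because the treatment indicator in an RD design is strongly correlated with the assignment score it must be adjusted for, the variance of the impact estimate is inflated by 1/(1 - ρ²); with a roughly normal score and the cutoff at its median, that multiplier is about 2.75.

```python
# Simplified, non-clustered illustration of why RD designs need larger samples
# than RCTs: regressing the outcome on treatment plus the assignment score
# inflates Var(impact estimate) by 1 / (1 - rho^2), where rho is the correlation
# between the treatment indicator and the score.
import numpy as np

rng = np.random.default_rng(0)
score = rng.standard_normal(1_000_000)     # assignment variable
treat = (score > 0).astype(float)          # sharp cutoff at the median

rho = np.corrcoef(treat, score)[0, 1]
design_effect = 1.0 / (1.0 - rho ** 2)
print(f"corr(treatment, score) ~ {rho:.3f}; RD design effect ~ {design_effect:.2f}")
print(f"i.e. roughly {design_effect:.1f}x the sample of a comparable RCT "
      "for the same precision")
```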
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2008
This article examines theoretical and empirical issues related to the statistical power of impact estimates for experimental evaluations of education programs. The author considers designs where random assignment is conducted at the school, classroom, or student level, and employs a unified analytic framework using statistical methods from the…
Descriptors: Elementary School Students, Research Design, Standardized Tests, Program Evaluation
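A sketch of the kind of comparison such a framework formalizes, with hypothetical numbers: for a fixed total sample, the minimum detectable effect when whole schools are randomized versus when students are randomized within schools, using standard two-level approximations.

```python
# Sketch of the comparison across levels of random assignment: MDE when whole
# schools are randomized versus when students are randomized within schools
# (standard two-level approximations, outcome variance normalized to 1; all
# numbers hypothetical).
from scipy.stats import norm

M = norm.ppf(0.975) + norm.ppf(0.80)      # ~2.80: 5% two-sided test, 80% power
J, m, rho = 40, 60, 0.15                  # 40 schools, 60 students each, ICC 0.15

var_school_assignment = 4.0 * (rho + (1 - rho) / m) / J
var_student_assignment = 4.0 * (1 - rho) / (J * m)   # blocking on school

print(f"school-level assignment : MDE ~ {M * var_school_assignment ** 0.5:.3f} SD")
print(f"student-level assignment: MDE ~ {M * var_student_assignment ** 0.5:.3f} SD")
```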

Boik, Robert J. – Journal of Educational and Behavioral Statistics, 1997
An analysis of repeated measures designs is proposed that uses an empirical Bayes estimator of the covariance matrix. The proposed analysis behaves like a univariate analysis when the sample size is small or sphericity is nearly satisfied, but like a multivariate analysis when the sample size is large or sphericity is strongly violated. (SLD)
Descriptors: Bayesian Statistics, Estimation (Mathematics), Multivariate Analysis, Research Design
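As background for the sphericity condition the abstract invokes (this is the standard Box/Greenhouse-Geisser epsilon, not the article's empirical Bayes estimator; the covariance matrices are invented), a short computation of how far a repeated-measures covariance matrix departs from sphericity: epsilon equals 1 when sphericity holds and falls toward 1/(k - 1) as it is violated.

```python
# Sketch of a standard sphericity measure (Box/Greenhouse-Geisser epsilon), not
# the article's empirical Bayes estimator.
import numpy as np

def sphericity_epsilon(sigma):
    """Epsilon for a k x k covariance matrix of repeated measures:
    1.0 means sphericity holds; the minimum is 1/(k-1)."""
    k = sigma.shape[0]
    # Orthonormal contrasts: an orthonormal basis of the space orthogonal to 1.
    q, _ = np.linalg.qr(np.column_stack([np.ones(k), np.eye(k)[:, :k - 1]]))
    C = q[:, 1:].T                                   # (k-1) x k, rows orthonormal
    s = C @ sigma @ C.T
    return np.trace(s) ** 2 / ((k - 1) * np.trace(s @ s))

# Hypothetical covariance matrices for k = 4 repeated measures
compound_symmetric = 0.5 * np.ones((4, 4)) + 0.5 * np.eye(4)   # sphericity holds
ar1 = np.array([[0.9 ** abs(i - j) for j in range(4)] for i in range(4)])

print(f"epsilon, compound symmetry: {sphericity_epsilon(compound_symmetric):.3f}")
print(f"epsilon, strong AR(1)     : {sphericity_epsilon(ar1):.3f}")
```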

Zeng, Lingjia; Cope, Ronald T. – Journal of Educational and Behavioral Statistics, 1995
Large-sample standard errors of linear equating for the counterbalanced design are derived using the general delta method. Computer simulations found that standard errors derived without the normality assumption were more accurate than those derived with the normality assumption in a large sample with moderately skewed score distributions. (SLD)
Descriptors: Computer Simulation, Error of Measurement, Research Design, Sample Size
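A sketch of the quantity being studied, under much simpler assumptions than the counterbalanced design: the linear equating transform that matches means and standard deviations, with a bootstrap (rather than delta-method) standard error at one score point, computed on two independently simulated, moderately skewed score distributions.

```python
# Sketch only: the linear equating transform and a bootstrap standard error for
# it at one score point, using two independently simulated samples.  The
# article's analytic (delta-method) standard errors for the counterbalanced
# design are not reproduced here.
import numpy as np

def linear_equate(x, scores_x, scores_y):
    """Map a score x from form X onto the form-Y scale by matching means and SDs."""
    slope = scores_y.std(ddof=1) / scores_x.std(ddof=1)
    return scores_y.mean() + slope * (x - scores_x.mean())

rng = np.random.default_rng(1)
n = 500
form_x = rng.gamma(shape=6.0, scale=4.0, size=n)     # moderately skewed scores
form_y = rng.gamma(shape=6.0, scale=4.5, size=n)

x0 = 30.0                                            # score point to equate
boot = [linear_equate(x0,
                      rng.choice(form_x, n, replace=True),
                      rng.choice(form_y, n, replace=True))
        for _ in range(2000)]
print(f"equated value at x = {x0}: {linear_equate(x0, form_x, form_y):.2f}")
print(f"bootstrap SE            : {np.std(boot, ddof=1):.3f}")
```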