Showing all 8 results
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
Peer reviewed
Rhoads, Christopher – Journal of Educational and Behavioral Statistics, 2017
Researchers designing multisite and cluster randomized trials of educational interventions will usually conduct a power analysis in the planning stage of the study. To conduct the power analysis, researchers often use estimates of intracluster correlation coefficients and effect sizes derived from an analysis of survey data. When there is…
Descriptors: Statistical Analysis, Hierarchical Linear Modeling, Surveys, Effect Size
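The normal-approximation arithmetic behind such a planning-stage power analysis can be sketched as follows. This is a minimal illustration, not the article's method: the function name, defaults, and example values are my own, and the calculation is the textbook two-arm sample-size formula inflated by the design effect implied by an estimated intracluster correlation.

```python
from statistics import NormalDist

def cluster_n_per_arm(effect_size, icc, cluster_size, alpha=0.05, power=0.8):
    """Approximate subjects per arm for a two-arm cluster randomized trial:
    the standard normal-approximation sample size, multiplied by the
    design effect 1 + (m - 1) * rho for clusters of size m."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    n_simple = 2 * (z / effect_size) ** 2          # simple random sampling
    return n_simple * (1 + (cluster_size - 1) * icc)

# Detecting a standardized effect of 0.25 with 20 students per cluster
# and an ICC of 0.10 requires roughly 728 students per arm.
n = cluster_n_per_arm(0.25, 0.1, 20)
```

Because the result scales linearly with the design effect, an underestimated ICC from survey data translates directly into an underpowered study, which is the planning risk the article examines.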
Peer reviewed
Hedges, Larry V.; Borenstein, Michael – Journal of Educational and Behavioral Statistics, 2014
The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
Descriptors: Experiments, Research Design, Sample Size, Correlation
Peer reviewed
Hedges, Larry V. – Journal of Educational and Behavioral Statistics, 2011
Research designs involving cluster randomization are becoming increasingly important in educational and behavioral research. Many of these designs involve two levels of clustering or nesting (students within classes and classes within schools). Researchers would like to compute effect size indexes based on the standardized mean difference to…
Descriptors: Effect Size, Research Design, Experiments, Computation
Peer reviewed
Hedges, Larry V. – Journal of Educational and Behavioral Statistics, 2009
A common mistake in analysis of cluster randomized experiments is to ignore the effect of clustering and analyze the data as if each treatment group were a simple random sample. This typically leads to an overstatement of the precision of results and anticonservative conclusions about precision and statistical significance of treatment effects.…
Descriptors: Data Analysis, Statistical Significance, Statistics, Experiments
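The overstatement of precision that the abstract describes is quantified by the familiar design effect; a minimal sketch, illustrative rather than drawn from the article:

```python
def design_effect(cluster_size, icc):
    """Variance inflation factor for a cluster sample relative to a simple
    random sample of the same total size: 1 + (m - 1) * rho."""
    return 1 + (cluster_size - 1) * icc

# Treating 25-student classes as independent observations with an
# ICC of 0.2 understates the variance of a group mean by a factor of 5.8,
# so reported standard errors are sqrt(5.8) ~ 2.4 times too small.
deff = design_effect(25, 0.2)  # 5.8
```

This is why ignoring clustering yields the anticonservative significance tests the abstract warns about: nominal p-values are computed against a variance that is several times too small.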
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2008
This article examines theoretical and empirical issues related to the statistical power of impact estimates for experimental evaluations of education programs. The author considers designs where random assignment is conducted at the school, classroom, or student level, and employs a unified analytic framework using statistical methods from the…
Descriptors: Elementary School Students, Research Design, Standardized Tests, Program Evaluation
Peer reviewed
Thompson, Kenneth N.; Schumacker, Randall E. – Journal of Educational and Behavioral Statistics, 1997
Evaluation of the binomial effect size display (BESD), proposed as a format for presenting effect sizes, suggests that its application is limited to presenting results of 2 × 2 tables in which the overall binomial success rate is 50%. Problems with BESD use are explored. (SLD)
Descriptors: Correlation, Effect Size, Research Design
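The BESD converts a correlation r into a hypothetical 2 × 2 table whose "success rates" are 0.5 ± r/2, which is exactly where the 50% overall-success-rate constraint the abstract criticizes comes from; a minimal sketch:

```python
def besd(r):
    """Binomial effect size display: the treatment and control 'success
    rates' implied by a correlation r, namely 0.5 + r/2 and 0.5 - r/2."""
    return 0.5 + r / 2, 0.5 - r / 2

# A correlation of 0.32 is displayed as a 66% vs 34% success rate.
treated, control = besd(0.32)  # (0.66, 0.34)
```

Note that the two rates always average to 0.5 by construction, so the display misrepresents outcomes whose actual base rate is far from 50%, which is the limitation the article documents.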
Peer reviewed
Zimmerman, Donald W. – Journal of Educational and Behavioral Statistics, 1997
Paired-samples experimental designs are appropriate and widely used when there is a natural correspondence or pairing of scores. However, researchers should also consider the implications of undetected correlation between supposedly independent samples in the absence of explicit pairing. (SLD)
Descriptors: Comparative Analysis, Correlation, Experiments, Research Design
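The consequence of such undetected correlation shows up directly in the variance of the difference between sample means; a minimal sketch with illustrative values not taken from the article:

```python
def var_mean_difference(sd1, sd2, n, rho=0.0):
    """Variance of (mean1 - mean2) for two samples of size n whose
    corresponding observations correlate at rho; rho = 0 recovers the
    usual independent-samples formula."""
    return (sd1 ** 2 + sd2 ** 2 - 2 * rho * sd1 * sd2) / n

independent = var_mean_difference(10, 10, 30)       # assumes rho = 0
correlated = var_mean_difference(10, 10, 30, 0.5)   # half as large
```

An independent-samples test applied to positively correlated samples uses the first (larger) variance when the second (smaller) one is correct, so the mismatch between the assumed and actual sampling distribution distorts Type I error rates and power.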