Showing all 7 results
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
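The closed-form variance expressions themselves are not given in the abstract. As rough orientation only, here is a minimal normal-approximation power sketch for a generic DID contrast, assuming an effect size delta and a standard error se that would come from design-specific variance formulas such as those the article derives; all numbers are illustrative.

# Minimal normal-approximation power sketch for a DID contrast.
# The standard error (se) is assumed given; in practice it would come
# from design-specific variance formulas like those in the article.
from scipy.stats import norm

def did_power(delta, se, alpha=0.05):
    """Two-sided power: P(|Z| > z_crit) when the true effect is delta."""
    z_crit = norm.ppf(1 - alpha / 2)
    z = delta / se
    return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

# Illustrative values only: effect of 0.20 SD with an SE of 0.07 SD.
print(round(did_power(0.20, 0.07), 3))  # ~0.815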
Peer reviewed
Wong, Vivian C.; Steiner, Peter M.; Cook, Thomas D. – Journal of Educational and Behavioral Statistics, 2013
In a traditional regression-discontinuity design (RDD), units are assigned to treatment on the basis of a cutoff score and a continuous assignment variable. The treatment effect is measured at a single cutoff location along the assignment variable. This article introduces the multivariate regression-discontinuity design (MRDD), where multiple…
Descriptors: Computation, Research Design, Regression (Statistics), Multivariate Analysis
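A minimal simulation sketch of the MRDD idea, under the assumption (ours, not necessarily the article's) that units are treated when they score below a cutoff on either of two assignment variables, with the effect read off locally at one frontier:

# Sketch of a two-assignment-variable sharp RDD. Assumed rule: treat
# if below the cutoff on EITHER variable; estimate locally at the
# x1-frontier among units not treated via x2. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x1, x2 = rng.uniform(0, 100, n), rng.uniform(0, 100, n)
c1, c2 = 40, 40
treated = (x1 < c1) | (x2 < c2)
tau = 5.0  # true treatment effect
y = 0.05 * x1 + 0.03 * x2 + tau * treated + rng.normal(0, 1, n)

# Naive local estimate at the x1 = c1 frontier: compare mean outcomes
# in narrow bands on each side, among units with x2 above its cutoff.
band, keep = 2.0, x2 >= c2
left = keep & (x1 >= c1 - band) & (x1 < c1)   # treated side
right = keep & (x1 >= c1) & (x1 < c1 + band)  # untreated side
print(y[left].mean() - y[right].mean())  # roughly tau (about 5)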
Peer reviewed
Safarkhani, Maryam; Moerbeek, Mirjam – Journal of Educational and Behavioral Statistics, 2013
In a randomized controlled trial, a decision needs to be made about the total number of subjects for adequate statistical power. One way to increase the power of a trial is by including a predictive covariate in the model. In this article, the effects of various covariate adjustment strategies on increasing the power are studied for discrete-time…
Descriptors: Statistical Analysis, Scientific Methodology, Research Design, Sample Size
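A generic illustration of why covariate adjustment buys power (a linear-model sketch, not the article's discrete-time survival setting): a covariate with outcome correlation rho shrinks the residual variance by a factor of 1 - rho**2, so the same sample size yields higher power. Numbers are illustrative.

# Generic two-group power with and without covariate adjustment.
from scipy.stats import norm

def power_two_group(delta, n_per_arm, sd, alpha=0.05):
    se = sd * (2 / n_per_arm) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    z = delta / se
    return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

delta, n, sd, rho = 0.5, 60, 1.0, 0.5
print(round(power_two_group(delta, n, sd), 3))                        # unadjusted, ~0.78
print(round(power_two_group(delta, n, sd * (1 - rho**2) ** 0.5), 3))  # adjusted, ~0.89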
Peer reviewed
Pustejovsky, James E.; Hedges, Larry V.; Shadish, William R. – Journal of Educational and Behavioral Statistics, 2014
In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general…
Descriptors: Hierarchical Linear Modeling, Effect Size, Maximum Likelihood Statistics, Computation
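A minimal sketch of the staggered structure of a multiple baseline design and a two-level random-intercept fit; the start times, effect size, and model below are illustrative assumptions, not the estimators the article develops.

# Each case receives the treatment at a different session; a simple
# two-level model pools the within-case treatment effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
starts = {"A": 5, "B": 9, "C": 13}  # staggered introduction times
rows = []
for case, start in starts.items():
    u = rng.normal(0, 0.5)          # case-level random intercept
    for t in range(20):
        treat = int(t >= start)
        y = 2.0 + u + 1.5 * treat + rng.normal(0, 1)
        rows.append({"case": case, "time": t, "treat": treat, "y": y})
df = pd.DataFrame(rows)

# Random intercept per case, fixed treatment effect.
fit = smf.mixedlm("y ~ treat", df, groups=df["case"]).fit()
print(fit.params["treat"])  # should be near the true effect, 1.5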
Peer reviewed
Rhoads, Christopher H. – Journal of Educational and Behavioral Statistics, 2011
Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…
Descriptors: Educational Research, Research Design, Effect Size, Experimental Groups
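The efficiency cost the abstract alludes to is usually quantified with the design effect, 1 + (m - 1) * ICC, for clusters of size m and intraclass correlation ICC; a short arithmetic sketch with made-up values:

# Design-effect arithmetic: a cluster-randomized design needs DEFF
# times more subjects than individual random assignment.
def design_effect(m, icc):
    return 1 + (m - 1) * icc

n_individual = 400               # subjects needed under individual assignment
m, icc = 25, 0.10
deff = design_effect(m, icc)
print(deff)                      # 3.4
print(int(n_individual * deff))  # 1360 subjects under cluster assignment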
Peer reviewed
Viechtbauer, Wolfgang – Journal of Educational and Behavioral Statistics, 2007
Standardized effect sizes and confidence intervals thereof are extremely useful devices for comparing results across different studies using scales with incommensurable units. However, exact confidence intervals for standardized effect sizes can usually be obtained only via iterative estimation procedures. The present article summarizes several…
Descriptors: Intervals, Effect Size, Comparative Analysis, Monte Carlo Methods
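One standard iterative procedure of the kind the article weighs approximations against: pivoting the noncentral t distribution to obtain an exact confidence interval for a two-sample standardized mean difference. A sketch with illustrative values:

# Exact CI for Cohen's d by inverting the noncentral t CDF.
import numpy as np
from scipy.stats import nct
from scipy.optimize import brentq

def exact_ci_d(d, n1, n2, alpha=0.05):
    scale = np.sqrt(1 / n1 + 1 / n2)  # maps noncentrality to d units
    t_obs = d / scale
    df = n1 + n2 - 2
    # Find noncentrality parameters whose tail areas match alpha/2;
    # this is the iterative step that has no closed form.
    lo = brentq(lambda ncp: nct.cdf(t_obs, df, ncp) - (1 - alpha / 2), -50, 50)
    hi = brentq(lambda ncp: nct.cdf(t_obs, df, ncp) - alpha / 2, -50, 50)
    return lo * scale, hi * scale

print(exact_ci_d(0.5, 40, 40))  # roughly (0.06, 0.94)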
Peer reviewed
Zimmerman, Donald W. – Journal of Educational and Behavioral Statistics, 1997
Paired-samples experimental designs are appropriate and widely used when there is a natural correspondence or pairing of scores. However, researchers should also consider the implications of undetected correlation between supposedly independent samples in the absence of explicit pairing. (SLD)
Descriptors: Comparative Analysis, Correlation, Experiments, Research Design
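A small simulation of the warning: when nominally independent samples are positively correlated, the independent-samples t test becomes conservative (its Type I error rate falls below the nominal level); negative correlation pushes it the other way. Values are illustrative.

# Type I error of the independent-samples t test under hidden
# correlation between the two samples, with true means equal.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
rho, n, reps, alpha = 0.5, 30, 5000, 0.05
cov = [[1, rho], [rho, 1]]
rejections = 0
for _ in range(reps):
    xy = rng.multivariate_normal([0, 0], cov, size=n)  # null is true
    if ttest_ind(xy[:, 0], xy[:, 1]).pvalue < alpha:
        rejections += 1
print(rejections / reps)  # well below 0.05 when rho > 0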