Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 6
Descriptor
Comparative Analysis: 7
Research Design: 7
Statistical Analysis: 4
Computation: 3
Effect Size: 3
Control Groups: 2
Correlation: 2
Experimental Groups: 2
Monte Carlo Methods: 2
Sample Size: 2
Simulation: 2
Source
Journal of Educational and…: 7
Publication Type
Journal Articles: 7
Reports - Descriptive: 3
Reports - Research: 3
Reports - Evaluative: 1
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
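As a rough sketch of the kind of closed-form power calculation involved (not Schochet's panel-data formulas, which additionally account for staggered treatment timing and other design features), the power of a two-sided z-test given an assumed effect size and standard error:

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_ppf(p: float) -> float:
    """Standard normal quantile via bisection (sufficient for a sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_z_test(effect: float, se: float, alpha: float = 0.05) -> float:
    """Power of a two-sided z-test for an effect with known standard error."""
    z_crit = normal_ppf(1.0 - alpha / 2.0)
    return 1.0 - normal_cdf(z_crit - effect / se)

# Hypothetical inputs: effect of 0.25 SD, standard error of 0.08
print(power_z_test(0.25, 0.08))
```

Closed-form variance expressions like Schochet's feed the `se` term here; the payoff is that power can be computed directly at the design stage, without simulation.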
Wong, Vivian C.; Steiner, Peter M.; Cook, Thomas D. – Journal of Educational and Behavioral Statistics, 2013
In a traditional regression-discontinuity design (RDD), units are assigned to treatment on the basis of a cutoff score and a continuous assignment variable. The treatment effect is measured at a single cutoff location along the assignment variable. This article introduces the multivariate regression-discontinuity design (MRDD), where multiple…
Descriptors: Computation, Research Design, Regression (Statistics), Multivariate Analysis
Safarkhani, Maryam; Moerbeek, Mirjam – Journal of Educational and Behavioral Statistics, 2013
In a randomized controlled trial, a decision must be made about the total number of subjects needed for adequate statistical power. One way to increase the power of a trial is to include a predictive covariate in the model. In this article, the effects of various covariate adjustment strategies on increasing power are studied for discrete-time…
Descriptors: Statistical Analysis, Scientific Methodology, Research Design, Sample Size
Pustejovsky, James E.; Hedges, Larry V.; Shadish, William R. – Journal of Educational and Behavioral Statistics, 2014
In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general…
Descriptors: Hierarchical Linear Modeling, Effect Size, Maximum Likelihood Statistics, Computation
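The staggered-introduction structure of a multiple baseline design can be sketched as a set of per-case treatment indicators (case names, start times, and the number of occasions here are all hypothetical):

```python
# Each case begins treatment at a different, experimenter-controlled time.
start_times = {"case_A": 4, "case_B": 7, "case_C": 10}  # hypothetical start points
n_occasions = 12

def treatment_indicator(start: int, n: int) -> list:
    """1 from the occasion treatment begins onward, 0 during baseline."""
    return [1 if t >= start else 0 for t in range(1, n + 1)]

design = {case: treatment_indicator(s, n_occasions)
          for case, s in start_times.items()}
for case, row in design.items():
    print(case, row)
```

Indicators like these become the treatment regressors in the kind of hierarchical model the article develops, with repeated measures nested within cases.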
Rhoads, Christopher H. – Journal of Educational and Behavioral Statistics, 2011
Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…
Descriptors: Educational Research, Research Design, Effect Size, Experimental Groups
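The variance cost of assigning intact clusters is conventionally summarized by the design effect, DEFF = 1 + (m − 1) × ICC, where m is cluster size and ICC the intraclass correlation. A minimal sketch with assumed values:

```python
def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from randomizing intact clusters:
    DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def effective_n(total_n: int, cluster_size: int, icc: float) -> float:
    """Size of a simple random sample with the same precision."""
    return total_n / design_effect(cluster_size, icc)

# Hypothetical: 40 classrooms of 25 students each, ICC = 0.15
print(design_effect(25, 0.15))
print(effective_n(1000, 25, 0.15))
```

Here 1,000 clustered students carry the information of only about 217 independently sampled ones, which is the precision penalty Rhoads weighs against the protection from contamination.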
Viechtbauer, Wolfgang – Journal of Educational and Behavioral Statistics, 2007
Standardized effect sizes and confidence intervals thereof are extremely useful devices for comparing results across different studies using scales with incommensurable units. However, exact confidence intervals for standardized effect sizes can usually be obtained only via iterative estimation procedures. The present article summarizes several…
Descriptors: Intervals, Effect Size, Comparative Analysis, Monte Carlo Methods
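As a hedged illustration of a Monte Carlo alternative to the exact iterative intervals the article discusses (this is not one of the procedures it summarizes), a percentile bootstrap interval for Cohen's d on simulated data:

```python
import math
import random
import statistics

random.seed(42)

def cohens_d(a, b) -> float:
    """Standardized mean difference with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled_var)

def bootstrap_ci(a, b, reps=2000, alpha=0.05):
    """Percentile bootstrap interval for d: resample each group with
    replacement, recompute d, and take empirical quantiles."""
    ds = sorted(cohens_d(random.choices(a, k=len(a)),
                         random.choices(b, k=len(b)))
                for _ in range(reps))
    return ds[int(reps * alpha / 2)], ds[int(reps * (1 - alpha / 2))]

# Simulated groups with a true standardized difference of 0.5
group1 = [random.gauss(0.5, 1) for _ in range(50)]
group2 = [random.gauss(0.0, 1) for _ in range(50)]
lo, hi = bootstrap_ci(group1, group2)
print(f"d = {cohens_d(group1, group2):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Unlike the exact noncentral-distribution intervals, the bootstrap trades iteration for simulation and makes weaker distributional assumptions, at the cost of Monte Carlo error.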

Zimmerman, Donald W. – Journal of Educational and Behavioral Statistics, 1997
Paired-samples experimental designs are appropriate and widely used when there is a natural correspondence or pairing of scores. However, researchers must also consider the implications of undetected correlation between supposedly independent samples when there is no explicit pairing. (SLD)
Descriptors: Comparative Analysis, Correlation, Experiments, Research Design
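Zimmerman's point can be made concrete by comparing the true standard error of a mean difference under correlation `rho` with the value the independent-samples formula assumes (a sketch with assumed sigma and n):

```python
import math

def true_se(sigma: float, n: int, rho: float) -> float:
    """SE of the difference of two sample means when the observations
    are pairwise correlated at rho (equal variances, equal n)."""
    return math.sqrt(2 * sigma ** 2 * (1 - rho) / n)

def assumed_se(sigma: float, n: int) -> float:
    """SE the independent-samples formula reports (rho treated as 0)."""
    return math.sqrt(2 * sigma ** 2 / n)

# Positive correlation makes the independent-samples SE too large
# (the test becomes conservative); negative correlation makes it
# too small (the test becomes liberal).
for rho in (-0.3, 0.0, 0.3):
    print(rho, round(true_se(1.0, 30, rho), 3), round(assumed_se(1.0, 30), 3))
```

Undetected positive dependence therefore costs power, while undetected negative dependence inflates the Type I error rate, which is exactly the hazard the abstract flags.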