Showing all 10 results
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2025
Random encouragement designs evaluate treatments that aim to increase participation in a program or activity. These randomized controlled trials (RCTs) can also assess the mediated effects of participation itself on longer-term outcomes using a complier average causal effect (CACE) estimation framework. This article considers power analysis…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
Chan, Wendy; Hedges, Larry Vernon – Journal of Educational and Behavioral Statistics, 2022
Multisite field experiments using the (generalized) randomized block design that assign treatments to individuals within sites are common in education and the social sciences. Under this design, there are two possible estimands of interest, and they differ based on whether sites or blocks have fixed or random effects. When the average treatment…
Descriptors: Research Design, Educational Research, Statistical Analysis, Statistical Inference
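The fixed-versus-random distinction in the entry above corresponds to two different ways of averaging site-level treatment effects. A minimal sketch (site names, effects, and sizes are made up for illustration, not taken from the paper) of one common way the two targets differ:

```python
# Hypothetical multisite data: per-site treatment effect and sample size.
site_effects = {"site_a": (0.30, 120), "site_b": (0.10, 40), "site_c": (0.22, 80)}

# Target 1: unweighted mean of site effects (each site counts equally,
# as when sites are viewed as a sample from a population of sites).
unweighted = sum(e for e, _ in site_effects.values()) / len(site_effects)

# Target 2: size-weighted mean ("person-average" effect, weighting each
# site by how many individuals it contributes).
total_n = sum(n for _, n in site_effects.values())
weighted = sum(e * n for e, n in site_effects.values()) / total_n

print(round(unweighted, 4), round(weighted, 4))
```

With these toy numbers the two estimands disagree (0.2067 vs 0.24), which is why the choice of target matters for inference.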
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2020
This article discusses estimation of average treatment effects for randomized controlled trials (RCTs) using grouped administrative data to help improve data access. The focus is on design-based estimators, derived using the building blocks of experiments, that are conducive to grouped data for a wide range of RCT designs, including clustered and…
Descriptors: Randomized Controlled Trials, Data Analysis, Research Design, Multivariate Analysis
Peer reviewed
Rhoads, Christopher – Journal of Educational and Behavioral Statistics, 2017
Researchers designing multisite and cluster randomized trials of educational interventions will usually conduct a power analysis in the planning stage of the study. To conduct the power analysis, researchers often use estimates of intracluster correlation coefficients and effect sizes derived from an analysis of survey data. When there is…
Descriptors: Statistical Analysis, Hierarchical Linear Modeling, Surveys, Effect Size
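The kind of power analysis this entry discusses can be illustrated with the standard design-effect approximation for a two-arm cluster randomized trial: the variance of the standardized effect estimate is inflated by 1 + (m − 1)ρ, where m is cluster size and ρ the intracluster correlation. A hedged sketch with illustrative inputs (none taken from the paper):

```python
from math import sqrt
from statistics import NormalDist

def crt_power(J, m, rho, delta, alpha=0.05):
    """Approximate two-sided power for a two-arm cluster RCT with
    J clusters per arm, m individuals per cluster, ICC rho, and
    standardized effect size delta (normal approximation)."""
    deff = 1 + (m - 1) * rho               # design effect
    se = sqrt(2 * deff / (J * m))          # SE of the standardized effect
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(delta / se - z)

# Illustrative inputs: 20 clusters of 25 per arm, ICC 0.15, effect 0.25.
print(round(crt_power(J=20, m=25, rho=0.15, delta=0.25), 3))
```

Note how sensitive the answer is to ρ: setting `rho=0` in the same call yields power near 0.98, while ρ = 0.15 drops it below 0.5, which is why misestimated ICCs (the paper's concern) can badly distort planning.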
Peer reviewed
Rhoads, Christopher H. – Journal of Educational and Behavioral Statistics, 2011
Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…
Descriptors: Educational Research, Research Design, Effect Size, Experimental Groups
Peer reviewed
Schochet, Peter Z.; Chiang, Hanley S. – Journal of Educational and Behavioral Statistics, 2011
In randomized controlled trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…
Descriptors: Computation, Identification, Educational Research, Research Design
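The instrumental-variables CACE idea this entry refers to is commonly estimated with the Wald ratio: the intent-to-treat (ITT) effect on the outcome divided by the ITT effect on treatment receipt. A minimal sketch with made-up numbers (not from the paper):

```python
def wald_cace(y1, y0, d1, d0):
    """y1, y0: mean outcomes in the assigned-treatment and control arms.
    d1, d0: treatment take-up rates in those arms.
    Returns the Wald/IV estimate of the complier average causal effect."""
    itt_y = y1 - y0          # ITT effect on the outcome
    itt_d = d1 - d0          # first stage: effect of assignment on take-up
    if itt_d == 0:
        raise ValueError("no compliance difference; CACE not identified")
    return itt_y / itt_d

# Toy example: a 2-point ITT effect spread over a 50-point compliance gap
# scales up to a CACE of 4 points for compliers.
print(wald_cace(y1=52.0, y0=50.0, d1=0.60, d0=0.10))
```

The scaling step is also why CACE estimates are noisier than ITT estimates when compliance is low, a point the power-analysis entries in this list take up.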
Peer reviewed
Hedges, Larry V. – Journal of Educational and Behavioral Statistics, 2009
A common mistake in the analysis of cluster randomized experiments is to ignore the effect of clustering and analyze the data as if each treatment group were a simple random sample. This typically leads to an overstatement of the precision of results and anticonservative conclusions about the statistical significance of treatment effects…
Descriptors: Data Analysis, Statistical Significance, Statistics, Experiments
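The mistake described in this entry can be quantified with the design effect: ignoring clustering understates the standard error by a factor of √(1 + (m − 1)ρ). A short sketch with hypothetical numbers:

```python
from math import sqrt

# Hypothetical two-arm cluster RCT: 400 people per arm in clusters of 20,
# ICC of 0.10, outcome standard deviation 1.0.
n, m, rho, sd = 400, 20, 0.10, 1.0

naive_se = sd * sqrt(2 / n)          # treats everyone as independent draws
deff = 1 + (m - 1) * rho             # design effect
correct_se = naive_se * sqrt(deff)   # clustering-adjusted standard error

print(round(naive_se, 4), round(correct_se, 4), round(sqrt(deff), 3))
```

Here the correct standard error is about 1.7 times the naive one, so naive confidence intervals are far too narrow: exactly the anticonservative behavior the abstract warns about.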
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2009
This article examines theoretical and empirical issues related to the statistical power of impact estimates under clustered regression discontinuity (RD) designs. The theory is grounded in the causal inference and hierarchical linear modeling literature, and the empirical work focuses on common designs used in education research to test…
Descriptors: Statistical Analysis, Regression (Statistics), Educational Research, Evaluation
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2008
This article examines theoretical and empirical issues related to the statistical power of impact estimates for experimental evaluations of education programs. The author considers designs where random assignment is conducted at the school, classroom, or student level, and employs a unified analytic framework using statistical methods from the…
Descriptors: Elementary School Students, Research Design, Standardized Tests, Program Evaluation
Peer reviewed
Hedges, Larry V. – Journal of Educational and Behavioral Statistics, 2007
Multisite research designs involving cluster randomization are becoming increasingly important in educational and behavioral research. Researchers would like to compute effect size indexes based on the standardized mean difference to compare the results of cluster-randomized studies (and corresponding quasi-experiments) with other studies and to…
Descriptors: Journal Articles, Effect Size, Computation, Research Design