Showing all 7 results
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2020
This article discusses estimation of average treatment effects for randomized controlled trials (RCTs) using grouped administrative data to help improve data access. The focus is on design-based estimators, derived using the building blocks of experiments, that are conducive to grouped data for a wide range of RCT designs, including clustered and…
Descriptors: Randomized Controlled Trials, Data Analysis, Research Design, Multivariate Analysis
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2013
In school-based randomized controlled trials (RCTs), a common design is to follow student cohorts over time. For such designs, education researchers usually focus on the place-based (PB) impact parameter, which is estimated using data collected on all students enrolled in the study schools at each data collection point. A potential problem with this…
Descriptors: Student Mobility, Scientific Methodology, Research Design, Intervention
Peer reviewed
Schochet, Peter Z.; Chiang, Hanley S. – Journal of Educational and Behavioral Statistics, 2011
In randomized controlled trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…
Descriptors: Computation, Identification, Educational Research, Research Design
Peer reviewed
Schochet, Peter Z. – Evaluation Review, 2009
In social policy evaluations, the multiple testing problem occurs due to the many hypothesis tests that are typically conducted across multiple outcomes and subgroups, which can lead to spurious impact findings. This article discusses a framework for addressing this problem that balances Type I and Type II errors. The framework involves specifying…
Descriptors: Policy, Evaluation, Testing Problems, Hypothesis Testing
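The entry above concerns adjusting for many simultaneous hypothesis tests. As a generic illustration of this class of corrections (not the specific framework in Schochet's article), the sketch below implements the well-known Benjamini-Hochberg step-up procedure, which controls the false discovery rate across a set of p-values:

```python
# Hypothetical sketch: Benjamini-Hochberg step-up procedure for
# multiple-testing correction. This is a standard textbook method,
# not the framework proposed in the 2009 Evaluation Review article.
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a list of booleans: True where the hypothesis is rejected."""
    m = len(p_values)
    # Sort p-value indices from smallest to largest p-value.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha.
    max_k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            max_k = rank
    # Reject every hypothesis whose p-value ranks at or below k.
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= max_k:
            rejected[idx] = True
    return rejected

# Eight p-values across outcomes/subgroups; only the two smallest survive.
flags = benjamini_hochberg(
    [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205], alpha=0.05
)
print(flags)  # [True, True, False, False, False, False, False, False]
```

Compared with a plain Bonferroni correction, this step-up rule sacrifices some Type I error control in exchange for greater power, which is exactly the Type I/Type II trade-off the abstract describes.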
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2009
This article examines theoretical and empirical issues related to the statistical power of impact estimates under clustered regression discontinuity (RD) designs. The theory is grounded in the causal inference and hierarchical linear modeling literature, and the empirical work focuses on common designs used in education research to test…
Descriptors: Statistical Analysis, Regression (Statistics), Educational Research, Evaluation
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2008
This article examines theoretical and empirical issues related to the statistical power of impact estimates for experimental evaluations of education programs. The author considers designs where random assignment is conducted at the school, classroom, or student level, and employs a unified analytic framework using statistical methods from the…
Descriptors: Elementary School Students, Research Design, Standardized Tests, Program Evaluation