Showing all 3 results
Peer reviewed
PDF on ERIC
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
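
As a rough illustration of why targeting impacts well below Cohen's 0.20 SD benchmark raises the statistical bar, the sketch below runs a standard two-sample power calculation. The use of statsmodels and the specific alpha, power, and effect-size values are assumptions for illustration, not taken from the report.

```python
# A minimal sketch (not from the report) showing how required sample sizes
# grow as the minimum detectable effect shrinks below Cohen's 0.20 SD.
# Alpha = 0.05 and power = 0.80 are conventional, assumed values.
from statsmodels.stats.power import TTestIndPower

power_solver = TTestIndPower()
for effect_size in (0.20, 0.10, 0.05):
    n_per_group = power_solver.solve_power(
        effect_size=effect_size, alpha=0.05, power=0.80, ratio=1.0
    )
    print(f"d = {effect_size:.2f}: ~{n_per_group:,.0f} per group")
```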
Porter, Kristin E.; Reardon, Sean F.; Unlu, Fatih; Bloom, Howard S.; Robinson-Cimpian, Joseph P. – MDRC, 2014
A valuable extension of the single-rating regression discontinuity design (RDD) is a multiple-rating RDD (MRRDD). To date, four main methods have been used to estimate average treatment effects at the multiple treatment frontiers of an MRRDD: the "surface" method, the "frontier" method, the "binding-score" method, and…
Descriptors: Regression (Statistics), Research Design, Quasiexperimental Design, Research Methodology
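
For orientation, the sketch below simulates the basic single-rating sharp RDD that the MRRDD generalizes and recovers the treatment effect at the cutoff with a local linear regression. It is a hedged illustration only; it does not implement the surface, frontier, or binding-score estimators discussed in the paper, and all data and parameter values are invented for the example.

```python
# A minimal sketch, not drawn from the MDRC paper: a simulated sharp,
# single-rating regression discontinuity estimate. All data and parameter
# values here are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, cutoff, true_effect = 2000, 0.0, 0.3

rating = rng.uniform(-1, 1, n)               # single rating variable
treated = (rating >= cutoff).astype(float)   # sharp assignment rule
outcome = 0.5 * rating + true_effect * treated + rng.normal(0, 1, n)

# Local linear regression within a bandwidth around the cutoff,
# allowing separate slopes on each side of the threshold.
bandwidth = 0.5
in_window = np.abs(rating - cutoff) <= bandwidth
X = np.column_stack([
    treated[in_window],
    rating[in_window] - cutoff,
    (rating[in_window] - cutoff) * treated[in_window],
])
fit = sm.OLS(outcome[in_window], sm.add_constant(X)).fit()
print(f"estimated effect at the cutoff: {fit.params[1]:.3f} "
      f"(SE {fit.bse[1]:.3f})")
```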
Peer reviewed
PDF on ERIC
Olsen, Robert B.; Unlu, Fatih; Price, Cristofer; Jaciw, Andrew P. – National Center for Education Evaluation and Regional Assistance, 2011
This report examines the differences in impact estimates and standard errors that arise when these are derived using state achievement tests only (as pre-tests and post-tests), study-administered tests only, or some combination of state- and study-administered tests. State tests may yield different evaluation results relative to a test that is…
Descriptors: Achievement Tests, Standardized Tests, State Standards, Reading Achievement
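
The sketch below is a minimal, assumption-laden illustration of the kind of impact model such evaluations typically estimate: the post-test regressed on treatment status with a pre-test covariate. Substituting a different pre-test or post-test (state vs. study-administered) can move both the point estimate and the standard error; the simulated data here are not from the report.

```python
# A minimal sketch with illustrative assumptions throughout, not the
# report's analysis: an impact estimate and standard error from a
# post-test regressed on treatment status plus a pre-test covariate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, true_impact = 1000, 0.15

pretest = rng.normal(0, 1, n)
treatment = rng.integers(0, 2, n).astype(float)
posttest = 0.6 * pretest + true_impact * treatment + rng.normal(0, 0.8, n)

X = sm.add_constant(np.column_stack([treatment, pretest]))
fit = sm.OLS(posttest, X).fit()
print(f"impact estimate: {fit.params[1]:.3f}  SE: {fit.bse[1]:.3f}")

# Dropping the pre-test covariate typically inflates the standard error,
# one reason the choice of pre-test measure matters for precision.
fit_no_pre = sm.OLS(posttest, sm.add_constant(treatment)).fit()
print(f"without pre-test: {fit_no_pre.params[1]:.3f}  "
      f"SE: {fit_no_pre.bse[1]:.3f}")
```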