Showing all 4 results
Peer reviewed
Hallberg, Kelly; Williams, Ryan; Swanlund, Andrew – Journal of Research on Educational Effectiveness, 2020
More aggregate data on school performance are available than ever before, opening up new possibilities for applied researchers interested in assessing the effectiveness of school-level interventions quickly and at a relatively low cost by implementing comparative interrupted time series (CITS) designs. We examine the extent to which effect…
Descriptors: Data Use, Research Methodology, Program Effectiveness, Design
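A minimal sketch of the CITS setup this abstract describes, using simulated school-by-year panel data. All variable names, the effect sizes, and the use of statsmodels are hypothetical illustrations, not the authors' code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: 20 intervention and 20 comparison schools,
# observed 4 years before and 4 years after adoption.
rng = np.random.default_rng(0)
n_schools, n_years = 40, 8
df = pd.DataFrame({
    "school": np.repeat(np.arange(n_schools), n_years),
    "year_c": np.tile(np.arange(n_years) - 4, n_schools),  # centered at adoption
})
df["treated"] = (df["school"] < 20).astype(int)
df["post"] = (df["year_c"] >= 0).astype(int)
df["score"] = (50 + 0.5 * df["year_c"] + 2 * df["treated"]
               + 3.0 * df["treated"] * df["post"]  # built-in level shift
               + rng.normal(0, 2, size=len(df)))

# CITS: each group's post-period deviation from its own baseline trend;
# treated:post is the impact net of the comparison group's deviation.
model = smf.ols("score ~ treated * (year_c + post + post:year_c)", data=df)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["school"]})
print(fit.params["treated:post"])  # estimated level-shift impact
```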
Peer reviewed
Weiss, Michael J.; Lockwood, J. R.; McCaffrey, Daniel F. – Journal of Research on Educational Effectiveness, 2016
In the "individually randomized group treatment" (IRGT) experimental design, individuals are first randomly assigned to a treatment arm or a control arm, but then within each arm, are grouped together (e.g., within classrooms/schools, through shared case managers, in group therapy sessions, through shared doctors, etc.) to receive…
Descriptors: Randomized Controlled Trials, Error of Measurement, Control Groups, Experimental Groups
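A hedged illustration of the analytic issue in IRGT designs: clustering is induced after randomization, so ignoring the groups understates the standard error on the treatment effect. The simulation and variable names below are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical IRGT data: 400 people randomized individually, then
# clustered into groups of 20 within each arm (e.g., therapy groups).
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({"treat": rng.permutation(np.repeat([0, 1], n // 2))})
# Grouping happens after randomization, separately within each arm.
df["group"] = df.groupby("treat").cumcount() // 20 + 100 * df["treat"]
group_effect = {g: rng.normal(0, 1) for g in df["group"].unique()}
df["y"] = 0.3 * df["treat"] + df["group"].map(group_effect) + rng.normal(0, 1, n)

# A random intercept for the post-randomization group keeps the SE on
# 'treat' honest; an OLS that ignores 'group' understates it.
naive = smf.ols("y ~ treat", data=df).fit()
mixed = smf.mixedlm("y ~ treat", data=df, groups=df["group"]).fit()
print(naive.bse["treat"], mixed.bse["treat"])  # naive SE is too small
```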
Peer reviewed
Weiss, Michael J.; Bloom, Howard S.; Verbitsky-Savitz, Natalya; Gupta, Himani; Vigil, Alma E.; Cullinan, Daniel N. – Journal of Research on Educational Effectiveness, 2017
Multisite trials, in which individuals are randomly assigned to alternative treatment arms within sites, offer an excellent opportunity to estimate the cross-site average effect of treatment assignment (intent to treat or ITT) "and" the amount by which this impact varies across sites. Although both of these statistics are substantively…
Descriptors: Randomized Controlled Trials, Evidence, Models, Intervention
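A minimal sketch of the two multisite estimands the abstract names: a fixed coefficient for the cross-site average ITT effect, plus a random treatment slope whose variance captures how much the impact varies across sites. The simulated data and names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical multisite trial: 30 sites, 60 people per site, with half
# of each site assigned to treatment; the true impact varies by site.
rng = np.random.default_rng(2)
sites, n_per = 30, 60
site = np.repeat(np.arange(sites), n_per)
treat = np.tile(np.repeat([0, 1], n_per // 2), sites)
site_impact = rng.normal(0.4, 0.3, sites)  # mean effect 0.4, SD 0.3 across sites
y = site_impact[site] * treat + rng.normal(0, 1, sites * n_per)
df = pd.DataFrame({"site": site, "treat": treat, "y": y})

# Random intercept and random treatment slope by site: the fixed 'treat'
# coefficient estimates the cross-site average ITT effect, and the
# random-slope variance estimates cross-site impact variation.
m = smf.mixedlm("y ~ treat", data=df, groups=df["site"], re_formula="~treat").fit()
print(m.fe_params["treat"])  # average ITT effect
print(m.cov_re)              # random-effect (co)variances, incl. the treat slope
```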
Peer reviewed
Nicole Bohme Carnegie; Masataka Harada; Jennifer L. Hill – Journal of Research on Educational Effectiveness, 2016
A major obstacle to developing evidence-based policy is the difficulty of implementing randomized experiments to answer all causal questions of interest. When using a nonexperimental study, it is critical to assess how much the results could be affected by unmeasured confounding. We present a set of graphical and numeric tools to explore the…
Descriptors: Randomized Controlled Trials, Simulation, Evidence Based Practice, Barriers
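A sketch in the spirit of such sensitivity analyses, not the authors' algorithm: posit an unmeasured confounder with assumed strengths of association with treatment and outcome, and trace how the treatment-effect estimate would shift. The data, grid values, and omitted-variable-bias correction below are all illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical nonexperimental data: treatment z is confounded by an
# unobserved u_true, so the naive regression overstates the z effect.
rng = np.random.default_rng(3)
n = 2000
u_true = rng.normal(size=n)                              # never observed
z = (0.8 * u_true + rng.normal(size=n) > 0).astype(int)  # confounded uptake
y = 0.5 * z + 0.8 * u_true + rng.normal(size=n)
df = pd.DataFrame({"z": z, "y": y})

naive = smf.ols("y ~ z", data=df).fit().params["z"]
print(f"naive estimate: {naive:.2f}")

# Sensitivity grid: assume an unmeasured U with mean difference 'd'
# between arms and outcome coefficient 'g', then subtract the implied
# omitted-variable bias g*d from the naive estimate.
for d in (0.0, 0.5, 1.0):       # assumed E[U|z=1] - E[U|z=0]
    for g in (0.0, 0.4, 0.8):   # assumed effect of U on y
        print(f"d={d}, g={g}: corrected effect ~ {naive - g * d:.2f}")
```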