Weiss, Michael J.; Bloom, Howard S.; Verbitsky-Savitz, Natalya; Gupta, Himani; Vigil, Alma E.; Cullinan, Daniel N. – Journal of Research on Educational Effectiveness, 2017
Multisite trials, in which individuals are randomly assigned to alternative treatment arms within sites, offer an excellent opportunity to estimate the cross-site average effect of treatment assignment (intent to treat or ITT) "and" the amount by which this impact varies across sites. Although both of these statistics are substantively…
Descriptors: Randomized Controlled Trials, Evidence, Models, Intervention
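
A two-level model along these lines makes the two estimands concrete (a generic sketch of the standard multisite setup, not necessarily the authors' exact specification):

    Y_{ij} = \alpha_j + B_j T_{ij} + \varepsilon_{ij}, \qquad B_j \sim N(\beta, \tau^2)

where Y_{ij} is the outcome for individual i in site j, T_{ij} indicates random assignment to treatment, \beta is the cross-site average ITT effect, and \tau^2 is the cross-site variance of impacts.
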
Porter, Kristin E.; Reardon, Sean F.; Unlu, Fatih; Bloom, Howard S.; Robinson-Cimpian, Joseph P. – MDRC, 2014
A valuable extension of the single-rating regression discontinuity design (RDD) is a multiple-rating RDD (MRRDD). To date, four main methods have been used to estimate average treatment effects at the multiple treatment frontiers of an MRRDD: the "surface" method, the "frontier" method, the "binding-score" method, and…
Descriptors: Regression (Statistics), Research Design, Quasiexperimental Design, Research Methodology
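
Of the four methods named in the abstract, the binding-score method is the simplest to sketch: collapse the multiple ratings into a single binding rating and run a standard single-rating RDD on it. A minimal Python illustration, assuming treatment is assigned when either rating falls below its cutoff (the column names and assignment rule are hypothetical):

    import numpy as np
    import statsmodels.formula.api as smf

    def binding_score_rdd(df, cutoff1, cutoff2, bandwidth):
        """Binding-score RDD estimate for a two-rating design.

        df is a pandas DataFrame with columns y, rating1, rating2.
        Assumes treatment when EITHER rating falls below its cutoff,
        so the binding rating is the minimum of the centered ratings;
        an 'all ratings' rule would use the maximum instead.
        """
        d = df.copy()
        d["r"] = np.minimum(d["rating1"] - cutoff1, d["rating2"] - cutoff2)
        d["treated"] = (d["r"] < 0).astype(int)
        local = d[d["r"].abs() <= bandwidth]
        # Local linear fit with separate slopes on each side of the cutoff
        fit = smf.ols("y ~ treated + r + treated:r", data=local).fit()
        return fit.params["treated"], fit.bse["treated"]

The surface and frontier methods instead model the full response surface or estimate effects separately at each treatment frontier, which this collapse deliberately avoids.
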
Bloom, Howard S.; Porter, Kristin E.; Weiss, Michael J.; Raudenbush, Stephen – Society for Research on Educational Effectiveness, 2013
To date, evaluation research and policy analysis have focused mainly on average program impacts and paid little systematic attention to their variation. Recently, the growing number of multi-site randomized trials that are being planned and conducted make it increasingly feasible to study "cross-site" variation in impacts. Important…
Descriptors: Research Methodology, Policy, Evaluation Research, Randomized Controlled Trials
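
The estimation problem the abstract raises can be illustrated with a mixed model that carries a random treatment slope for each site; this is a generic sketch on simulated data, not the authors' code:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated data: 20 sites of 50 people, true average impact 0.20,
    # true cross-site impact standard deviation 0.10 (all hypothetical)
    rng = np.random.default_rng(0)
    n_sites, n = 20, 50
    site = np.repeat(np.arange(n_sites), n)
    treat = rng.integers(0, 2, n_sites * n)
    site_impact = rng.normal(0.20, 0.10, n_sites)
    y = rng.normal(0, 1, n_sites * n) + site_impact[site] * treat
    df = pd.DataFrame({"y": y, "treat": treat, "site": site})

    # Random intercept and random treatment slope by site
    model = smf.mixedlm("y ~ treat", df, groups=df["site"], re_formula="~treat")
    result = model.fit(reml=True)
    print(result.params["treat"])  # cross-site average ITT effect
    print(result.cov_re)           # 'treat' diagonal entry estimates the
                                   # cross-site impact variance
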
Bloom, Howard S. – MDRC, 2006
This chapter examines the core analytic elements of randomized experiments for social research. Its goal is to provide faculty members, graduate students, and applied researchers with a compact discussion of the design and analysis of randomized experiments for measuring the impacts of social or educational interventions. Design issues considered…
Descriptors: Research Methodology, Research Design, Experiments, Social Science Research
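
One design quantity central to this literature is the minimum detectable effect (MDE), the smallest true impact a design can reliably distinguish from zero. A standard large-sample formulation (a sketch, not quoted from the chapter) for randomly assigning a fraction P of n individuals to treatment is:

    MDE \approx (z_{1-\alpha/2} + z_{\text{power}}) \cdot \sqrt{\frac{\sigma^2}{P(1-P)\, n}} \approx 2.8 \cdot SE(\hat{\beta})

where \sigma^2 is the outcome variance and the multiplier 2.8 corresponds to a two-tailed test at \alpha = .05 with 80 percent power.
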

Bloom, Howard S. – Evaluation Review, 2002
Introduces a new approach for measuring the impact of whole-school reforms. The approach, based on "short" interrupted time-series analysis, is explained, its statistical procedures are outlined, and its use in the evaluation of a major whole-school reform, Accelerated Schools, is described (H. Bloom and others, 2001). (SLD)
Descriptors: Educational Change, Elementary Education, Evaluation Methods, Research Design
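
The core projection idea behind a "short" interrupted time-series estimate can be sketched in a few lines; the numbers and the simple linear baseline trend here are illustrative assumptions, not data or procedures from the study:

    import numpy as np
    import statsmodels.api as sm

    # Project the pre-reform trend forward and treat the post-reform
    # deviation from that projection as the impact estimate
    years = np.arange(1994, 2002)          # 1994-1998 pre, 1999-2001 post
    scores = np.array([48.0, 49.1, 50.3, 50.9, 52.2, 55.0, 56.1, 57.4])
    pre = years < 1999

    X = sm.add_constant(years[pre])
    trend = sm.OLS(scores[pre], X).fit()
    projected = trend.predict(sm.add_constant(years[~pre]))
    impact = (scores[~pre] - projected).mean()
    print(f"Average post-reform deviation from baseline trend: {impact:.2f}")
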
Bloom, Howard S.; Richburg-Hayes, Lashawn; Black, Alison Rebeck – MDRC, 2005
This paper examines how controlling statistically for baseline covariates (especially pretests) improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement. Part I of the paper introduces the concepts, issues, and options involved. Parts II and III present empirical findings…
Descriptors: Program Effectiveness, Reading Achievement, Mathematics Achievement, Research Methodology
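
The precision question the paper studies can be made concrete with the standard minimum detectable effect size (MDES) formula for school-randomized designs; this sketch uses a fixed multiplier in place of the exact degrees-of-freedom calculation, and all parameter values below are hypothetical:

    import math

    def mdes(J, n, rho, P=0.5, R2_between=0.0, R2_within=0.0, M=2.8):
        """Approximate MDES when J schools of n students are randomized.

        rho: intraclass correlation; R2_between / R2_within: share of
        variance explained by covariates (e.g., a pretest) at each level.
        M ~ 2.8 approximates the multiplier for 80% power at alpha = .05,
        two-tailed; see the paper for the exact calculation.
        """
        between = rho * (1 - R2_between) / (P * (1 - P) * J)
        within = (1 - rho) * (1 - R2_within) / (P * (1 - P) * J * n)
        return M * math.sqrt(between + within)

    # A school-level pretest buys precision mainly through R2_between:
    print(mdes(J=40, n=60, rho=0.15))                   # no covariates
    print(mdes(J=40, n=60, rho=0.15, R2_between=0.80))  # with pretest
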
Bloom, Howard S.; Michalopoulos, Charles; Hill, Carolyn J.; Lei, Ying – 2002
A study explored which nonexperimental comparison group methods provide the most accurate estimates of the impacts of mandatory welfare-to-work programs and whether the best methods work well enough to substitute for random assignment experiments. Findings were compared for nonexperimental comparison groups and statistical adjustment procedures…
Descriptors: Adult Education, Comparative Analysis, Control Groups, Error of Measurement
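
One statistical adjustment procedure that benchmarking studies of this kind typically examine is propensity-score matching; a minimal generic sketch follows, with hypothetical inputs:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    def psm_estimate(X, treated, y):
        """One-to-one nearest-neighbor propensity-score matching estimate.

        X: (n, k) covariate array; treated: 0/1 array; y: outcome array.
        """
        model = LogisticRegression(max_iter=1000).fit(X, treated)
        scores = model.predict_proba(X)[:, 1]
        t_mask, c_mask = treated == 1, treated == 0
        nn = NearestNeighbors(n_neighbors=1).fit(scores[c_mask].reshape(-1, 1))
        _, idx = nn.kneighbors(scores[t_mask].reshape(-1, 1))
        matched_controls = y[c_mask][idx[:, 0]]
        return y[t_mask].mean() - matched_controls.mean()

    # Benchmarking logic: compare this nonexperimental estimate with the
    # experimental impact from the randomized sample; the difference is
    # the method's bias in that setting.
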