Showing 1 to 15 of 21 results
Peer reviewed
Direct link
Peter Z. Schochet – Journal of Educational and Behavioral Statistics, 2025
Random encouragement designs evaluate treatments that aim to increase participation in a program or activity. These randomized controlled trials (RCTs) can also assess the mediated effects of participation itself on longer-term outcomes using a complier average causal effect (CACE) estimation framework. This article considers power analysis…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
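As context for the CACE framework this entry invokes, here is a minimal sketch of the standard Wald/instrumental-variables ratio: the intent-to-treat effect on outcomes divided by the encouragement's effect on take-up. The data and variable names are illustrative, not drawn from the article:

import numpy as np

def cace_wald(y, d, z):
    """CACE via the Wald/IV ratio: ITT effect on y over ITT effect on take-up d."""
    itt_y = y[z == 1].mean() - y[z == 0].mean()  # encouragement's effect on outcomes
    itt_d = d[z == 1].mean() - d[z == 0].mean()  # encouragement's effect on take-up
    return itt_y / itt_d

# Toy data: z = random encouragement, d = actual participation, y = outcome.
rng = np.random.default_rng(0)
n = 10_000
z = rng.integers(0, 2, n)
d = (rng.random(n) < 0.2 + 0.5 * z).astype(int)  # encouragement adds 50 points of take-up
y = 2.0 * d + rng.normal(size=n)                 # true effect of participating = 2
print(cace_wald(y, d, z))                        # prints roughly 2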
Peer reviewed
Direct link
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook; Yanli Xie; Dung Pham; Peilin Qiu; Ning Sui – Grantee Submission, 2024
Multisite trials that randomize individuals (e.g., students) within sites (e.g., schools) or clusters (e.g., teachers/classrooms) within sites (e.g., schools) are commonly used for program evaluation because they provide opportunities to learn about treatment effects as well as their heterogeneity across sites and subgroups (defined by moderating…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Educational Research, Effect Size
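To make the "heterogeneity across sites" idea concrete, a small illustrative simulation of a multisite trial with individuals randomized within sites (all parameters invented for the sketch; this is not the authors' estimator):

import numpy as np

# Students randomized within each school (site); the cross-site spread of
# impacts matters as much as the average impact.
rng = np.random.default_rng(1)
n_sites, n_students = 30, 60
true_impacts = rng.normal(0.25, 0.10, n_sites)   # impacts genuinely vary by site

estimates = []
for impact in true_impacts:
    t = rng.integers(0, 2, n_students)            # within-site random assignment
    y = impact * t + rng.normal(0, 1, n_students)
    estimates.append(y[t == 1].mean() - y[t == 0].mean())

estimates = np.array(estimates)
print(estimates.mean())       # average impact across sites, about 0.25
print(estimates.std(ddof=1))  # spread = true heterogeneity plus estimation noise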
Peer reviewed
Direct link
Peter Schochet – Society for Research on Educational Effectiveness, 2024
Random encouragement designs are randomized controlled trials (RCTs) that test interventions aimed at increasing participation in a program or activity whose take-up is not universal. In these RCTs, instead of randomizing individuals or clusters directly into treatment and control groups to participate in a program or activity, the randomization…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
Peer reviewed
Direct link
Brown, Seth; Song, Mengli; Cook, Thomas D.; Garet, Michael S. – American Educational Research Journal, 2023
This study examined bias reduction in the eight nonequivalent comparison group designs (NECGDs) that result from combining (a) choice of a local versus non-local comparison group, and analytic use or not of (b) a pretest measure of the study outcome and (c) a rich set of other covariates. Bias was estimated as the difference in causal estimate…
Descriptors: Research Design, Pretests Posttests, Computation, Bias
Peer reviewed
Direct link
Simpson, Adrian – Journal of Research on Educational Effectiveness, 2023
Evidence-based education aims to support policy makers choosing between potential interventions. This rarely involves considering each in isolation; instead, sets of evidence regarding many potential policy interventions are considered. Filtering a set on any quantity measured with error risks the "winner's curse": conditional on…
Descriptors: Effect Size, Educational Research, Evidence Based Practice, Foreign Countries
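The winner's curse this entry describes is easy to see by simulation: even when every intervention has the same true effect, selecting the largest of several noisy estimates overstates it. A minimal sketch with illustrative parameters:

import numpy as np

# All 20 interventions share the same true effect; each estimate is noisy.
rng = np.random.default_rng(2)
true_effect, se, k, reps = 0.10, 0.05, 20, 10_000

winners = np.empty(reps)
for r in range(reps):
    estimates = rng.normal(true_effect, se, size=k)  # one noisy estimate per intervention
    winners[r] = estimates.max()                     # choose the apparent best

print(winners.mean())  # about 0.19: picking the "winner" nearly doubles the true 0.10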
Peer reviewed
Direct link
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2020
This article discusses estimation of average treatment effects for randomized controlled trials (RCTs) using grouped administrative data to help improve data access. The focus is on design-based estimators, derived using the building blocks of experiments, that are conducive to grouped data for a wide range of RCT designs, including clustered and…
Descriptors: Randomized Controlled Trials, Data Analysis, Research Design, Multivariate Analysis
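For orientation, the basic design-based building block is the Neyman difference-in-means estimator with its conservative variance. The sketch below shows the textbook non-clustered case, not the grouped-data extension the article develops; data are simulated for illustration:

import numpy as np

def neyman_ate(y_t, y_c):
    """Difference-in-means ATE with the conservative Neyman variance estimator."""
    ate = y_t.mean() - y_c.mean()
    var = y_t.var(ddof=1) / y_t.size + y_c.var(ddof=1) / y_c.size
    return ate, var ** 0.5

rng = np.random.default_rng(3)
y_c = rng.normal(0.0, 1.0, 400)   # control group outcomes
y_t = rng.normal(0.3, 1.0, 400)   # treatment group outcomes; true effect = 0.3
ate, se = neyman_ate(y_t, y_c)
print(f"ATE = {ate:.3f}, SE = {se:.3f}")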
Peer reviewed
Direct link
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2018
Design-based methods have recently been developed as a way to analyze randomized controlled trial (RCT) data for designs with a single treatment and control group. This article builds on this framework to develop design-based estimators for evaluations with multiple research groups. Results are provided for a wide range of designs used in…
Descriptors: Randomized Controlled Trials, Computation, Educational Research, Experimental Groups
Peer reviewed
PDF on ERIC Download full text
Schochet, Peter – Society for Research on Educational Effectiveness, 2018
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies (Freedman, 2008; Lin, 2013; Imbens and Rubin, 2015; Schochet, 2013, 2016; Yang and Tsiatis, 2001). The non-parametric estimators are derived using the building blocks of experimental designs with minimal…
Descriptors: Randomized Controlled Trials, Computation, Educational Research, Experimental Groups
Sales, Adam C.; Hansen, Ben B. – Journal of Educational and Behavioral Statistics, 2020
Conventionally, regression discontinuity analysis contrasts a univariate regression's limits as its independent variable, "R," approaches a cut point, "c," from either side. Alternative methods target the average treatment effect in a small region around "c," at the cost of an assumption that treatment assignment,…
Descriptors: Regression (Statistics), Computation, Statistical Inference, Robustness (Statistics)
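The conventional limit-based contrast this abstract describes can be sketched as two local linear fits meeting at the cutoff; the bandwidth, cutoff, and data below are illustrative only:

import numpy as np

def rd_local_linear(r, y, c, h):
    """RD impact as the gap between two local linear fits meeting at the cutoff c."""
    def intercept_at_cutoff(mask):
        slope, intercept = np.polyfit(r[mask] - c, y[mask], deg=1)
        return intercept                  # fitted value of y at r = c
    below = (r >= c - h) & (r < c)
    above = (r >= c) & (r <= c + h)
    return intercept_at_cutoff(above) - intercept_at_cutoff(below)

rng = np.random.default_rng(4)
r = rng.uniform(-1, 1, 5_000)                               # running variable
y = 0.5 * r + 0.4 * (r >= 0) + rng.normal(0, 0.2, r.size)   # jump of 0.4 at c = 0
print(rd_local_linear(r, y, c=0.0, h=0.25))                 # roughly 0.4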
Peer reviewed
Direct link
Spybrook, Jessaca; Kelcey, Benjamin; Dong, Nianbo – Journal of Educational and Behavioral Statistics, 2016
Recently, there has been an increase in the number of cluster randomized trials (CRTs) to evaluate the impact of educational programs and interventions. These studies are often powered for the main effect of treatment to address the "what works" question. However, program effects may vary by individual characteristics or by context,…
Descriptors: Randomized Controlled Trials, Statistical Analysis, Computation, Educational Research
Peer reviewed
PDF on ERIC Download full text
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2016
The purpose of this study is to propose a general framework for power analyses to detect the moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval to…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Effect Size, Computation
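For reference, the usual MDES calculation takes a t-based multiplier times the standard error of the effect estimate. The sketch below applies it to the main effect in a balanced two-level CRT; parameters are illustrative, the paper's moderator-effect formulas extend this logic, and scipy is assumed available:

import math
from scipy.stats import t as t_dist

def mdes(se_effect, df, alpha=0.05, power=0.80):
    """MDES = (t_{1-alpha/2, df} + t_{power, df}) * SE of the effect size estimate."""
    multiplier = t_dist.ppf(1 - alpha / 2, df) + t_dist.ppf(power, df)
    return multiplier * se_effect

# Illustrative balanced two-level CRT: 40 schools, 25 students each, ICC = 0.15.
J, n, icc = 40, 25, 0.15
se = math.sqrt(4 * (icc + (1 - icc) / n) / J)  # SE of the standardized main effect
print(mdes(se, df=J - 2))                      # about 0.39 standard deviations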
Peer reviewed
PDF on ERIC Download full text
What Works Clearinghouse, 2020
The What Works Clearinghouse (WWC) is an initiative of the U.S. Department of Education's Institute of Education Sciences (IES), which was established under the Education Sciences Reform Act of 2002. It is an important part of IES's strategy to use rigorous and relevant research, evaluation, and statistics to improve the nation's education system.…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Peer reviewed
PDF on ERIC Download full text
Kelcey, Ben – Society for Research on Educational Effectiveness, 2014
A common design in education research for interventions operating at a group or cluster level is a cluster randomized trial (CRT) (Bloom, 2005). In CRTs, intact clusters (e.g., schools) rather than individuals (e.g., students) are assigned to treatment conditions; such trials are frequently an effective way to study interventions because they permit…
Descriptors: Cluster Grouping, Randomized Controlled Trials, Statistical Analysis, Computation
Peer reviewed
PDF on ERIC Download full text
What Works Clearinghouse, 2017
The What Works Clearinghouse (WWC) systematic review process is the basis of many of its products, enabling the WWC to use consistent, objective, and transparent standards and procedures in its reviews, while also ensuring comprehensive coverage of the relevant literature. The WWC systematic review process consists of five steps: (1) Developing…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Peer reviewed
PDF on ERIC Download full text
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2017
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies. The impact estimators are derived using the building blocks of experimental designs with minimal assumptions, and have good statistical properties. The methods apply to randomized controlled trials (RCTs) and…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology