Publication Date
In 2025 (0)
Since 2024 (1)
Since 2021, last 5 years (6)
Since 2016, last 10 years (13)
Since 2006, last 20 years (14)
Source
Grantee Submission (14)
Author
A. Krishnamachari (1)
Anglin, Kylie L. (1)
Balsai, Michael (1)
Bell, Andrew (1)
Benjamin A. Motz (1)
Bloom, Howard (1)
Colin Hill (1)
Cromley, Jennifer (1)
Cynthia U. Norris (1)
Dai, Ting (1)
Diego del Blanco Orobitg (1)
Publication Type
Reports - Evaluative (6)
Reports - Research (6)
Journal Articles (3)
Information Analyses (1)
Opinion Papers (1)
Reference Materials -… (1)
Education Level
Higher Education (2)
Postsecondary Education (2)
Early Childhood Education (1)
Elementary Education (1)
High Schools (1)
Preschool Education (1)
Secondary Education (1)
Two Year Colleges (1)
Location
New York (New York) (2)
Indiana (1)
North Carolina (1)
Eric C. Hedberg – Grantee Submission, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters, each containing individual units of observation (e.g., students nested within schools, where the schools are assigned to treatment). One consideration in these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
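A rough sense of the per-cluster question can be sketched with the standard two-level design-effect formula; the intraclass correlation, cluster count, and power multiplier below are illustrative choices, not figures from the paper.

```python
import numpy as np

def mdes_cluster_rct(J, n, rho, P=0.5, multiplier=2.8):
    """Approximate minimum detectable effect size (in SD units) for a
    two-level cluster RCT with J clusters of n units each, intraclass
    correlation rho, and proportion P of clusters treated. The 2.8
    multiplier corresponds to 80% power at two-sided alpha = .05."""
    var = (rho / (P * (1 - P) * J)
           + (1 - rho) / (P * (1 - P) * J * n))
    return multiplier * np.sqrt(var)

# Diminishing returns to cluster size: with rho = 0.15 (illustrative),
# going from 10 to 100 students per school barely moves the MDES.
for n in (5, 10, 20, 50, 100):
    print(n, round(mdes_cluster_rct(J=40, n=n, rho=0.15), 3))
```

Because the rho term does not shrink as n grows, adding units within clusters eventually buys almost no precision; only adding clusters does.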
Marie-Andrée Somers; Michael J. Weiss; Colin Hill – Grantee Submission, 2022
The last two decades have seen a dramatic increase in randomized controlled trials (RCTs) conducted in community colleges. Yet, there is limited empirical information on the design parameters necessary to plan the sample size for RCTs in this context. We provide empirical estimates of key design parameters, discussing lessons based on the pattern…
Descriptors: Randomized Controlled Trials, Research Design, Sample Size, Statistical Analysis
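How much such design parameters matter for planning can be seen by inverting the same MDES formula to get the number of clusters required; the rho and R-squared values below are placeholders, not the paper's empirical estimates.

```python
import math

def clusters_needed(mdes, n, rho, r2_between=0.0, r2_within=0.0,
                    P=0.5, multiplier=2.8):
    """Clusters J required for a two-level cluster RCT to detect an
    effect of `mdes` SD units; r2_between and r2_within are the shares
    of between- and within-cluster variance explained by covariates."""
    per_cluster_var = (rho * (1 - r2_between)
                       + (1 - rho) * (1 - r2_within) / n)
    return math.ceil(multiplier**2 * per_cluster_var
                     / (mdes**2 * P * (1 - P)))

# A strong cluster-level covariate (hypothetical r2_between = 0.7)
# sharply cuts the number of colleges a study would have to recruit.
print(clusters_needed(mdes=0.20, n=50, rho=0.10))                   # 93
print(clusters_needed(mdes=0.20, n=50, rho=0.10, r2_between=0.7))   # 38
```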

Kenneth A. Frank; Qinyun Lin; Spiro J. Maroulis – Grantee Submission, 2024
In the complex world of educational policy, causal inferences will be debated. As we review non-experimental designs in educational policy, we focus on how to clarify the terms of that debate. We begin by presenting the potential outcomes/counterfactual framework and then describe approximations to the counterfactual generated from the…
Descriptors: Causal Models, Statistical Inference, Observation, Educational Policy
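A minimal simulation of the potential-outcomes framework the authors begin from: each unit has two potential outcomes but reveals only one, so the naive treated-versus-untreated contrast mixes the causal effect with selection bias. All numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                        # confounder, e.g. prior achievement
y0 = 50 + 5 * x + rng.normal(size=n)          # potential outcome if untreated
y1 = y0 + 2.0                                 # true effect = 2.0 for everyone
t = (x + rng.normal(size=n) > 0).astype(int)  # selection into treatment on x
y = np.where(t == 1, y1, y0)                  # only one outcome is observed

print("true ATE:", (y1 - y0).mean())          # 2.0
print("naive difference:", y[t == 1].mean() - y[t == 0].mean())
# The gap is selection bias: E[y0 | t=1] != E[y0 | t=0].
print("selection bias:", y0[t == 1].mean() - y0[t == 0].mean())
```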
Wilhelmina van Dijk; Cynthia U. Norris; Sara A. Hart – Grantee Submission, 2022
Randomized controlled trials are considered the gold standard for causal inference. In many cases, however, randomization of participants in social work research studies is not feasible or ethical. This paper introduces the co-twin control design as an alternative quasi-experimental design that can provide evidence of causal mechanisms when randomization…
Descriptors: Twins, Research Design, Randomized Controlled Trials, Quasiexperimental Design
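The logic of the co-twin control design fits in a few lines: differencing within twin pairs removes anything the twins share, such as genes or family environment, while a naive individual-level regression stays confounded. The effect sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
pairs = 50_000

u = rng.normal(size=pairs)                       # shared genes/environment
expo = u[:, None] + rng.normal(size=(pairs, 2))  # exposure, one column per twin
y = 1.5 * expo + 3.0 * u[:, None] + rng.normal(size=(pairs, 2))

naive = np.polyfit(expo.ravel(), y.ravel(), 1)[0]   # confounded by u
d_expo = expo[:, 0] - expo[:, 1]                    # u cancels in the difference
d_y = y[:, 0] - y[:, 1]
within = np.polyfit(d_expo, d_y, 1)[0]
print(f"true effect 1.5 | naive {naive:.2f} | co-twin {within:.2f}")
```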
Xinran Li; Peng Ding; Donald B. Rubin – Grantee Submission, 2020
With many pretreatment covariates and treatment factors, the classical factorial experiment often fails to balance covariates across multiple factorial effects simultaneously. Therefore, it is intuitive to restrict the randomization of the treatment factors to satisfy certain covariate balance criteria, possibly conforming to the tiers of…
Descriptors: Experiments, Research Design, Randomized Controlled Trials, Sampling
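A simplified single-factor sketch of the rerandomization idea (the paper generalizes it to multiple factorial effects and tiered balance criteria): redraw the random assignment until a Mahalanobis balance statistic on the covariate means falls below a threshold. The threshold and dimensions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 4
X = rng.normal(size=(n, k))          # pretreatment covariates

def mahalanobis_balance(X, t):
    """Mahalanobis distance between treated and control covariate means."""
    diff = X[t == 1].mean(axis=0) - X[t == 0].mean(axis=0)
    cov = np.cov(X, rowvar=False) * (1 / t.sum() + 1 / (1 - t).sum())
    return diff @ np.linalg.solve(cov, diff)

def rerandomize(X, threshold=1.0, max_draws=10_000):
    """Accept only assignments that meet the balance criterion."""
    for _ in range(max_draws):
        t = rng.permutation(np.repeat([0, 1], len(X) // 2))
        if mahalanobis_balance(X, t) <= threshold:
            return t
    raise RuntimeError("threshold too strict for max_draws")

t = rerandomize(X)
print("accepted balance statistic:", round(mahalanobis_balance(X, t), 3))
```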
Kaplan, Avi; Cromley, Jennifer; Perez, Tony; Dai, Ting; Mara, Kyle; Balsai, Michael – Grantee Submission, 2020
In this commentary, we complement other constructive critiques of educational randomized controlled trials (RCTs) by calling attention to the commonly ignored role of context in the causal mechanisms undergirding educational phenomena. We argue that evidence for the central role of context in causal mechanisms challenges the assumption that RCT findings…
Descriptors: Context Effect, Educational Research, Randomized Controlled Trials, Causal Models
Benjamin A. Motz; Öykü Üner; Harmony E. Jankowski; Marcus A. Christie; Kim Burgas; Diego del Blanco Orobitg; Mark A. McDaniel – Grantee Submission, 2023
For researchers seeking to improve education, a common goal is to identify teaching practices that have causal benefits in classroom settings. To test whether an instructional practice exerts a causal influence on an outcome measure, the most straightforward and compelling method is to conduct an experiment. While experimentation is common in…
Descriptors: Learning Analytics, Experiments, Learning Processes, Learning Management Systems
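As a toy version of the experimental logic the abstract points to, the sketch below randomizes students between two invented instructional conditions and compares outcomes; the condition labels, score distributions, and effect are all assumptions, not data from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per_arm = 120

# Hypothetical classroom experiment: random assignment makes the
# difference in means an unbiased estimate of the causal effect.
retrieval = rng.normal(loc=76, scale=10, size=n_per_arm)  # assumed scores
restudy = rng.normal(loc=72, scale=10, size=n_per_arm)

diff = retrieval.mean() - restudy.mean()
t_stat, p_val = stats.ttest_ind(retrieval, restudy)
print(f"estimated effect {diff:.1f} points, t = {t_stat:.2f}, p = {p_val:.4f}")
```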
K. L. Anglin; A. Krishnamachari; V. Wong – Grantee Submission, 2020
This article reviews important statistical methods for estimating the impact of interventions on outcomes in education settings, particularly programs that are implemented in field, rather than laboratory, settings. We begin by describing the causal inference challenge for evaluating program effects. Then four research designs are discussed that…
Descriptors: Causal Models, Statistical Inference, Intervention, Program Evaluation
Rachel Abenavoli; Natalia Rojas; Rebecca Unterman; Elise Cappella; Josh Wallack; Pamela Morris – Grantee Submission, 2021
In this article, Rachel Abenavoli, Natalia Rojas, Rebecca Unterman, Elise Cappella, Josh Wallack, and Pamela Morris argue that research-practice partnerships make it possible to rigorously study relevant policy questions in ways that would otherwise be infeasible. Randomized controlled trials of small-scale programs have shown us that early…
Descriptors: Educational Research, Early Childhood Education, Research Design, Preschool Education
Wong, Vivian C.; Steiner, Peter M.; Anglin, Kylie L. – Grantee Submission, 2018
Given the widespread use of non-experimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the…
Descriptors: Evaluation Methods, Program Evaluation, Program Effectiveness, Comparative Analysis
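The WSC logic can be mimicked in simulation: draw an experimental benchmark and a non-experimental estimate from the same population, then read the gap between them as the NE method's bias. The confounding structure and all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000
x = rng.normal(size=N)                      # confounder in the NE arm
y0 = 10 + 4 * x + rng.normal(size=N)
tau = 1.0                                   # true treatment effect

# Benchmark: randomized experiment, assignment independent of x.
t_rct = rng.integers(0, 2, size=N)
y_rct = y0 + tau * t_rct
benchmark = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

# Non-experiment: self-selection on x, then covariate adjustment.
t_ne = (x + rng.normal(size=N) > 0).astype(int)
y_ne = y0 + tau * t_ne
unadjusted = y_ne[t_ne == 1].mean() - y_ne[t_ne == 0].mean()
beta = np.linalg.lstsq(np.column_stack([np.ones(N), t_ne, x]),
                       y_ne, rcond=None)[0]
print(f"RCT {benchmark:.2f} | NE raw {unadjusted:.2f} | NE adjusted {beta[1]:.2f}")
```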
Bloom, Howard; Bell, Andrew; Reiman, Kayla – Grantee Submission, 2020
This article assesses the likely generalizability of educational treatment-effect estimates from regression discontinuity designs (RDDs) when treatment assignment is based on academic pretest scores. Our assessment uses data on outcome and pretest measures from six educational experiments, ranging from preschool through high school, to estimate…
Descriptors: Data Use, Randomized Controlled Trials, Research Design, Regression (Statistics)
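For context, a sharp RDD of the kind assessed here identifies the treatment effect only at the pretest cutoff, which is exactly why generalizability away from the cutoff is at issue. A minimal sketch with an assumed linear outcome model and an arbitrary bandwidth:

```python
import numpy as np

rng = np.random.default_rng(5)
n, cutoff, h = 20_000, 50.0, 10.0

pretest = rng.uniform(0, 100, size=n)
treated = (pretest < cutoff).astype(int)      # low scorers get the program
outcome = 20 + 0.5 * pretest + 6.0 * treated + rng.normal(scale=5, size=n)

# Local linear fit on each side; the intercepts are the fitted values
# at the cutoff, and their difference is the RDD estimate.
est = {}
for side, mask in [("below", (pretest >= cutoff - h) & (pretest < cutoff)),
                   ("above", (pretest >= cutoff) & (pretest <= cutoff + h))]:
    slope, intercept = np.polyfit(pretest[mask] - cutoff, outcome[mask], 1)
    est[side] = intercept
print("RDD estimate at the cutoff:", round(est["below"] - est["above"], 2))
```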
Hedges, Larry V.; Schauer, Jacob – Grantee Submission, 2018
Background and purpose: Studies of education and learning that were described as experiments have been carried out in the USA by educational psychologists since about 1900. In this paper, we discuss the history of randomised trials in education in the USA in terms of five historical periods. In each period, the use of randomised trials was…
Descriptors: Randomized Controlled Trials, Educational Research, Educational Psychology, Educational History
Larry V. Hedges – Grantee Submission, 2017
The scientific rigor of education research has improved dramatically since the year 2000. Much of the credit for this improvement is deserved by Institute of Education Sciences (IES) policies that helped create a demand for rigorous research; increased human capital capacity to carry out such work; provided funding for the work itself; and…
Descriptors: Educational Research, Generalization, Intervention, Human Capital
Reardon, Sean F.; Raudenbush, Stephen W. – Grantee Submission, 2013
The increasing availability of data from multi-site randomized trials provides a potential opportunity to use instrumental variables methods to study the effects of multiple hypothesized mediators of the effect of a treatment. We derive nine assumptions needed to identify the effects of multiple mediators when using site-by-treatment interactions…
Descriptors: Causal Models, Measures (Individuals), Research Design, Context Effect
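A stripped-down version of the identification strategy with a single mediator rather than several: because the treatment's first-stage effect on the mediator varies across sites, the site-by-treatment interactions can serve as instruments in a two-stage least squares fit. Every parameter below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
S, n_per = 30, 500
N = S * n_per

site = np.repeat(np.arange(S), n_per)
t = rng.integers(0, 2, size=N)              # treatment randomized within site
pi = rng.uniform(0.5, 2.0, size=S)          # site-specific first-stage effect
u = rng.normal(size=N)                      # confounder of mediator and outcome
m = pi[site] * t + u + rng.normal(size=N)   # mediator
y = 1.0 * m - 2.0 * u + rng.normal(size=N)  # true mediator effect = 1.0

D = np.eye(S)[site]                         # site dummies
Z = np.hstack([D, D * t[:, None]])          # site + site-by-treatment instruments
# 2SLS: project the mediator onto the instruments, then use the fitted
# mediator in the outcome regression alongside the site dummies.
m_hat = Z @ np.linalg.lstsq(Z, m, rcond=None)[0]
beta = np.linalg.lstsq(np.hstack([D, m_hat[:, None]]), y, rcond=None)[0]
ols = np.linalg.lstsq(np.hstack([D, m[:, None]]), y, rcond=None)[0]
print(f"OLS (confounded) {ols[-1]:.2f} | 2SLS {beta[-1]:.2f}")
```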