William Herbert Yeaton – International Journal of Research & Method in Education, 2024
Though previously unacknowledged, a SMART (Sequential Multiple Assignment Randomized Trial) design uses both regression discontinuity (RD) and randomized controlled trial (RCT) designs. This combination structure creates a conceptual symbiosis between the two designs that enables both RCT-based and previously unrecognized RD-based inferential claims.…
Descriptors: Research Design, Randomized Controlled Trials, Regression (Statistics), Inferences
Peter Z. Schochet – Journal of Educational and Behavioral Statistics, 2025
Random encouragement designs evaluate treatments that aim to increase participation in a program or activity. These randomized controlled trials (RCTs) can also assess the mediated effects of participation itself on longer term outcomes using a complier average causal effect (CACE) estimation framework. This article considers power analysis…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
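The dilution logic underlying CACE power analysis can be sketched in a few lines (a minimal normal-approximation sketch, not Schochet's actual framework; the function name, default SD, and critical value are illustrative assumptions):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cace_power(n_per_arm, cace, compliance, sd=1.0, z_crit=1.96):
    """Approximate two-sided power to detect a complier average causal
    effect (CACE) in a random encouragement design, using the identity
    ITT = CACE * compliance rate: low take-up dilutes the detectable effect."""
    itt = cace * compliance                     # diluted intent-to-treat effect
    se = sd * math.sqrt(2.0 / n_per_arm)        # SE of a difference in means
    z = itt / se
    return norm_cdf(z - z_crit) + norm_cdf(-z - z_crit)

# With 50% take-up, a CACE of 0.20 SD behaves like an ITT effect of 0.10 SD.
print(cace_power(n_per_arm=500, cace=0.2, compliance=0.5))
```

Because the ITT effect scales with the compliance rate, required samples grow roughly with the inverse square of take-up.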
Zuchao Shen; Ben Kelcey – Society for Research on Educational Effectiveness, 2023
I. Purpose of the Study: Detecting whether interventions work or not (through main effect analysis) can provide empirical evidence regarding the causal linkage between malleable factors (e.g., interventions) and learner outcomes. In complement, moderation analyses help delineate for whom and under what conditions intervention effects are most…
Descriptors: Intervention, Program Effectiveness, Evidence, Research Design
Wei Li; Yanli Xie; Dung Pham; Nianbo Dong; Jessaca Spybrook; Benjamin Kelcey – Asia Pacific Education Review, 2024
Cluster randomized trials (CRTs) are commonly used to evaluate the causal effects of educational interventions, where entire clusters (e.g., schools) are randomly assigned to treatment or control conditions. This study introduces statistical methods for designing and analyzing two-level (e.g., students nested within schools) and three-level…
Descriptors: Research Design, Multivariate Analysis, Randomized Controlled Trials, Hierarchical Linear Modeling
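The core planning quantities for a two-level CRT can be sketched with standard formulas (a minimal sketch of textbook results, not the article's specific multivariate methods; the 2.8 multiplier, approximating t_alpha + t_beta for 80% power at alpha = .05, is an illustrative convention):

```python
import math

def design_effect(n_per_cluster, icc):
    """Variance inflation from randomizing clusters rather than students:
    1 + (n - 1) * ICC, where ICC is the intraclass correlation."""
    return 1.0 + (n_per_cluster - 1.0) * icc

def mdes_two_level(n_clusters, n_per_cluster, icc, multiplier=2.8):
    """Minimum detectable effect size (SD units) for a balanced two-arm CRT
    with clusters split evenly between conditions."""
    total_n = n_clusters * n_per_cluster
    return multiplier * math.sqrt(4.0 * design_effect(n_per_cluster, icc) / total_n)

# 40 schools of 25 students with ICC = .15: clustering inflates variance 4.6x.
print(design_effect(25, 0.15))
print(mdes_two_level(n_clusters=40, n_per_cluster=25, icc=0.15))
```

Even a modest ICC sharply raises the MDES relative to an individually randomized trial of the same total size, which is why CRT design work centers on the number of clusters.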
Timothy Lycurgus; Daniel Almirall – Society for Research on Educational Effectiveness, 2024
Background: Education scientists are increasingly interested in constructing interventions that are adaptive over time to suit the evolving needs of students, classrooms, or schools. Such "adaptive interventions" (also referred to as dynamic treatment regimens or dynamic instructional regimes) determine which treatment should be offered…
Descriptors: Educational Research, Research Design, Randomized Controlled Trials, Intervention
Peter Schochet – Society for Research on Educational Effectiveness, 2024
Random encouragement designs are randomized controlled trials (RCTs) that test interventions aimed at increasing participation in a program or activity whose take up is not universal. In these RCTs, instead of randomizing individuals or clusters directly into treatment and control groups to participate in a program or activity, the randomization…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
Huey T. Chen; Liliana Morosanu; Victor H. Chen – Asia Pacific Journal of Education, 2024
The Campbellian validity typology has been used as a foundation for outcome evaluation and for developing evidence-based interventions for decades. As such, randomized controlled trials were preferred for outcome evaluation. However, some evaluators disagree with the validity typology's argument that randomized controlled trials are the best design…
Descriptors: Evaluation Methods, Systems Approach, Intervention, Evidence Based Practice
Brown, Seth; Song, Mengli; Cook, Thomas D.; Garet, Michael S. – American Educational Research Journal, 2023
This study examined bias reduction in the eight nonequivalent comparison group designs (NECGDs) that result from combining (a) choice of a local versus non-local comparison group, and analytic use or not of (b) a pretest measure of the study outcome and (c) a rich set of other covariates. Bias was estimated as the difference in causal estimate…
Descriptors: Research Design, Pretests Posttests, Computation, Bias
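The within-study-comparison logic the abstract describes can be sketched simply (a minimal sketch of the generic approach, not the authors' eight designs; the gain-score adjustment and function names are illustrative assumptions):

```python
def mean(xs):
    return sum(xs) / len(xs)

def raw_estimate(y_treat, y_comp):
    """Unadjusted difference in outcome means for a nonequivalent
    comparison group design (NECGD)."""
    return mean(y_treat) - mean(y_comp)

def pretest_adjusted_estimate(y_treat, pre_treat, y_comp, pre_comp):
    """Gain-score adjustment: difference the pretest of the study outcome
    out of each group before comparing, one simple way to use a pretest."""
    return (mean(y_treat) - mean(pre_treat)) - (mean(y_comp) - mean(pre_comp))

def estimated_bias(necgd_estimate, rct_benchmark):
    """Bias is the gap between the NECGD estimate and the RCT benchmark."""
    return necgd_estimate - rct_benchmark
```

When the comparison group starts ahead on the pretest, the raw estimate is biased downward and the pretest-adjusted estimate recovers much of the benchmark, which is the pattern such studies quantify.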
Paul Thompson; Kaydee Owen; Richard P. Hastings – International Journal of Research & Method in Education, 2024
Traditionally, cluster randomized controlled trials are analyzed with the average intervention effect as the quantity of interest. However, in populations with higher degrees of heterogeneity, where variation may differ across values of a covariate, this focus may not be optimal. Within education and social science contexts, exploring the variation in…
Descriptors: Randomized Controlled Trials, Intervention, Mathematics Education, Mathematics Skills
Li, Wei; Konstantopoulos, Spyros – Educational and Psychological Measurement, 2023
Cluster randomized control trials often incorporate a longitudinal component where, for example, students are followed over time and student outcomes are measured repeatedly. Besides examining how intervention effects induce changes in outcomes, researchers are sometimes also interested in exploring whether intervention effects on outcomes are…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Longitudinal Studies, Hierarchical Linear Modeling
Patterson, Charity G.; Leland, Natalie E.; Mormer, Elaine; Palmer, Catherine V. – Journal of Speech, Language, and Hearing Research, 2022
Purpose: Individual-randomized trials are the gold standard for testing the efficacy and effectiveness of drugs, devices, and behavioral interventions. Health care delivery, educational, and programmatic interventions are often complex, involving multiple levels of change and measurement, precluding individual randomization for testing.…
Descriptors: Speech Language Pathology, Randomized Controlled Trials, Intervention, Speech Therapy
What Works Clearinghouse, 2021
The What Works Clearinghouse (WWC) identifies existing research on educational interventions, assesses the quality of the research, and summarizes and disseminates the evidence from studies that meet WWC standards. The WWC aims to provide enough information so educators can use the research to make informed decisions in their settings. This…
Descriptors: Program Effectiveness, Intervention, Educational Research, Educational Quality
Wu, Edward; Gagnon-Bartsch, Johann A. – Journal of Educational and Behavioral Statistics, 2021
In paired experiments, participants are grouped into pairs with similar characteristics, and one observation from each pair is randomly assigned to treatment. The resulting treatment and control groups should be well-balanced; however, there may still be small chance imbalances. Building on work for completely randomized experiments, we propose a…
Descriptors: Experiments, Groups, Research Design, Statistical Analysis
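The paired design described here can be sketched in a few lines (a minimal sketch of the standard paired estimator, not the authors' proposed imbalance adjustment; the data layout and function names are illustrative assumptions):

```python
import random

def paired_assignment(n_pairs, seed=0):
    """For each pair, randomly pick which member (0 or 1) gets treatment."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n_pairs)]

def paired_estimate(pairs, assignment):
    """Estimate the treatment effect as the mean within-pair difference.
    pairs: list of (y_member0, y_member1); assignment[i] marks the treated member."""
    diffs = [y[a] - y[1 - a] for y, a in zip(pairs, assignment)]
    return sum(diffs) / len(diffs)
```

Because each pair contributes its own treatment-control contrast, pair-level similarity cancels out of the estimate; the residual "small chance imbalances" the abstract mentions are what covariate adjustment then targets.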
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
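The sample-size stakes of chasing smaller impacts can be sketched with a standard identity (a minimal sketch, not the article's argument; the 2.8 multiplier, approximating t_alpha + t_beta for 80% power at alpha = .05, is an illustrative convention):

```python
def required_n_per_arm(mdes, sd=1.0, multiplier=2.8):
    """Individuals per arm needed for a simple two-arm individually
    randomized trial: n scales with the inverse square of the target effect."""
    return 2.0 * (multiplier * sd / mdes) ** 2

# Halving the target MDES quadruples the required sample.
print(required_n_per_arm(0.20))  # Cohen's "small" effect
print(required_n_per_arm(0.10))
```

This inverse-square scaling is why designs powered for impacts well below 0.20 SD become expensive quickly, and why the abstract flags risks in that drive.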
K. L. Anglin; A. Krishnamachari; V. Wong – Grantee Submission, 2020
This article reviews important statistical methods for estimating the impact of interventions on outcomes in education settings, particularly programs that are implemented in field, rather than laboratory, settings. We begin by describing the causal inference challenge for evaluating program effects. Then four research designs are discussed that…
Descriptors: Causal Models, Statistical Inference, Intervention, Program Evaluation