Showing all 12 results
Peer reviewed
Peter Z. Schochet – Journal of Educational and Behavioral Statistics, 2025
Random encouragement designs evaluate treatments that aim to increase participation in a program or activity. These randomized controlled trials (RCTs) can also assess the mediated effects of participation itself on longer-term outcomes using a complier average causal effect (CACE) estimation framework. This article considers power analysis…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
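In a CACE framework like the one Schochet describes, the standard estimator is the intent-to-treat (ITT) effect on the outcome divided by the encouragement effect on take-up (the Wald/IV ratio). A minimal sketch in Python, using simulated data and hypothetical variable names rather than anything from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical encouragement design: Z is random encouragement,
# D is actual participation (take-up), Y is the outcome.
z = rng.integers(0, 2, n)                        # randomized encouragement
compliance = rng.random(n) < 0.4                 # assume 40% take up when encouraged
d = np.where(z == 1, compliance, 0).astype(int)  # participation (no always-takers assumed)
y = 1.0 * d + rng.normal(0, 1, n)                # assumed true participation effect = 1.0

itt_y = y[z == 1].mean() - y[z == 0].mean()      # ITT effect on the outcome
itt_d = d[z == 1].mean() - d[z == 0].mean()      # first-stage effect on take-up
cace = itt_y / itt_d                             # Wald / IV estimate of the CACE
print(f"ITT = {itt_y:.3f}, take-up difference = {itt_d:.3f}, CACE = {cace:.3f}")
```

With a 40 percent take-up difference the CACE is roughly 2.5 times the ITT effect, which is also why power requirements grow quickly as compliance falls.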
Peer reviewed
Garret J. Hall; Sophia Putzeys; Thomas R. Kratochwill; Joel R. Levin – Educational Psychology Review, 2024
Single-case experimental designs (SCEDs) have a long history in clinical and educational disciplines. One underdeveloped area in advancing SCED design and analysis is understanding how internal validity threats and operational concerns are avoided or mitigated. Two strategies to ameliorate such issues in SCEDs involve replication and…
Descriptors: Research Design, Graphs, Case Studies, Validity
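One way to see the replication logic in SCEDs is a multiple-baseline-style layout, where a similar level change is expected to appear each time the intervention is introduced for a new case. A toy sketch with hypothetical data (not drawn from the article):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical multiple-baseline-style data: three cases, staggered intervention start,
# and an assumed level change of +2 once the intervention begins.
n_sessions = 20
starts = [6, 10, 14]                      # staggered intervention points (replication across cases)
effects = []
for start in starts:
    phase = np.arange(n_sessions) >= start
    y = 5 + 2.0 * phase + rng.normal(0, 1, n_sessions)
    effects.append(y[phase].mean() - y[~phase].mean())

# The effect "replicates" if each case shows a comparable baseline-to-intervention level change.
print("per-case level changes:", np.round(effects, 2))
```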
Peer reviewed
Peter Schochet – Society for Research on Educational Effectiveness, 2024
Random encouragement designs are randomized controlled trials (RCTs) that test interventions aimed at increasing participation in a program or activity whose take-up is not universal. In these RCTs, instead of randomizing individuals or clusters directly into treatment and control groups to participate in a program or activity, the randomization…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
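A common back-of-the-envelope power calculation for encouragement designs scales the ITT minimum detectable effect by one over the encouragement effect on take-up. The sketch below uses textbook normal-approximation formulas and made-up planning values, not the article's own derivations:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical planning values (assumptions, not from the article):
n = 2_000           # total sample size
p_treat = 0.5       # fraction randomized to encouragement
sigma = 1.0         # outcome standard deviation (effect-size units if sigma = 1)
take_up_diff = 0.4  # encouragement effect on participation
alpha, power = 0.05, 0.80

# Minimum detectable ITT effect for a two-arm, individually randomized design
m = norm.ppf(1 - alpha / 2) + norm.ppf(power)
mde_itt = m * sigma * np.sqrt(1.0 / (p_treat * (1 - p_treat) * n))

# The CACE minimum detectable effect scales the ITT value by 1 / take-up difference
mde_cace = mde_itt / take_up_diff
print(f"MDE (ITT) = {mde_itt:.3f} SD, MDE (CACE) = {mde_cace:.3f} SD")
```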
Peer reviewed
Kenneth A. Frank; Qinyun Lin; Spiro J. Maroulis – Grantee Submission, 2024
In the complex world of educational policy, causal inferences will be debated. As we review non-experimental designs in educational policy, we focus on how to clarify and focus the terms of debate. We begin by presenting the potential outcomes/counterfactual framework and then describe approximations to the counterfactual generated from the…
Descriptors: Causal Models, Statistical Inference, Observation, Educational Policy
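For orientation, the potential outcomes/counterfactual framework the authors begin with is usually written as follows; this is the standard textbook formulation, not notation quoted from the paper:

```latex
% Potential outcomes / counterfactual notation (standard formulation)
% Y_i(1): outcome for unit i if treated; Y_i(0): outcome if untreated.
\begin{align*}
  \tau_i     &= Y_i(1) - Y_i(0)              && \text{unit-level causal effect (never fully observed)}\\
  \text{ATE} &= \mathbb{E}\bigl[Y_i(1) - Y_i(0)\bigr] && \text{average treatment effect}\\
  Y_i        &= D_i\,Y_i(1) + (1 - D_i)\,Y_i(0) && \text{observed outcome given treatment indicator } D_i
\end{align*}
```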
Wilhelmina van Dijk; Cynthia U. Norris; Sara A. Hart – Grantee Submission, 2022
Randomized controlled trials are considered the pinnacle of causal inference. In many cases, however, randomization of participants in social work research studies is not feasible or ethical. This paper introduces the co-twin control design as an alternative quasi-experimental design to provide evidence of causal mechanisms when randomization…
Descriptors: Twins, Research Design, Randomized Controlled Trials, Quasiexperimental Design
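The co-twin control logic can be sketched as a within-pair comparison: differencing outcomes and exposures within twin pairs removes the genetic and family background factors the twins share. A small illustrative simulation (hypothetical data and effect sizes, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 1_000

# Hypothetical co-twin control analysis: each twin pair shares a family-level confounder;
# exposure and outcome differ within pairs.
family = rng.normal(0, 1, n_pairs)                       # shared genetic/family factor
exposure = rng.normal(family[:, None], 1, (n_pairs, 2))  # exposure correlated with family factor
outcome = 0.5 * exposure + 2.0 * family[:, None] + rng.normal(0, 1, (n_pairs, 2))

# Naive between-individual estimate is confounded by the family factor ...
naive = np.polyfit(exposure.ravel(), outcome.ravel(), 1)[0]

# ... while differencing within pairs removes everything the twins share.
d_exposure = exposure[:, 0] - exposure[:, 1]
d_outcome = outcome[:, 0] - outcome[:, 1]
within_pair = np.polyfit(d_exposure, d_outcome, 1)[0]
print(f"naive slope = {naive:.2f}, within-pair slope = {within_pair:.2f} (assumed effect = 0.5)")
```

The naive slope is inflated by the shared family factor, while the within-pair slope recovers the assumed exposure effect.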
Kaplan, Avi; Cromley, Jennifer; Perez, Tony; Dai, Ting; Mara, Kyle; Balsai, Michael – Educational Researcher, 2020
In this commentary, we complement other constructive critiques of educational randomized controlled trials (RCTs) by calling attention to the commonly ignored role of context in causal mechanisms undergirding educational phenomena. We argue that evidence for the central role of context in causal mechanisms challenges the assumption that RCT findings…
Descriptors: Context Effect, Educational Research, Randomized Controlled Trials, Causal Models
Kaplan, Avi; Cromley, Jennifer; Perez, Tony; Dai, Ting; Mara, Kyle; Balsai, Michael – Grantee Submission, 2020
In this commentary, we complement other constructive critiques of educational randomized controlled trials (RCTs) by calling attention to the commonly ignored role of context in causal mechanisms undergirding educational phenomena. We argue that evidence for the central role of context in causal mechanisms challenges the assumption that RCT findings…
Descriptors: Context Effect, Educational Research, Randomized Controlled Trials, Causal Models
K. L. Anglin; A. Krishnamachari; V. Wong – Grantee Submission, 2020
This article reviews important statistical methods for estimating the impact of interventions on outcomes in education settings, particularly programs that are implemented in field, rather than laboratory, settings. We begin by describing the causal inference challenge for evaluating program effects. Then four research designs are discussed that…
Descriptors: Causal Models, Statistical Inference, Intervention, Program Evaluation
Peer reviewed
What Works Clearinghouse, 2022
Education decision makers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time-consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Reardon, Sean F.; Raudenbush, Stephen W. – Grantee Submission, 2013
The increasing availability of data from multi-site randomized trials provides a potential opportunity to use instrumental variables methods to study the effects of multiple hypothesized mediators of the effect of a treatment. We derive nine assumptions needed to identify the effects of multiple mediators when using site-by-treatment interactions…
Descriptors: Causal Models, Measures (Individuals), Research Design, Context Effect
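The identification strategy uses variation in treatment effects across sites: site-by-treatment interactions serve as instruments for the mediators in a two-stage least squares setup. A stylized sketch under strong simplifying assumptions (far fewer than the nine the authors derive), with simulated data and hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites, n_per_site = 30, 200
n = n_sites * n_per_site

# Hypothetical multi-site trial: treatment effects on two mediators vary by site,
# so site-by-treatment interactions can instrument for both mediators.
site = np.repeat(np.arange(n_sites), n_per_site)
z = rng.integers(0, 2, n)                           # within-site random assignment
gamma1 = rng.normal(1.0, 0.5, n_sites)              # site-specific effect on mediator 1
gamma2 = rng.normal(0.5, 0.5, n_sites)              # site-specific effect on mediator 2
u = rng.normal(0, 1, n)                             # unobserved confounder of mediators and outcome
m1 = gamma1[site] * z + u + rng.normal(0, 1, n)
m2 = gamma2[site] * z + u + rng.normal(0, 1, n)
y = 0.8 * m1 + 0.2 * m2 + u + rng.normal(0, 1, n)   # assumed mediator effects: 0.8 and 0.2

# Design matrices: site dummies (fixed effects) and site-by-treatment interactions (instruments)
site_dummies = (site[:, None] == np.arange(n_sites)).astype(float)
instruments = site_dummies * z[:, None]
exog = site_dummies                                  # included exogenous controls
Z = np.hstack([exog, instruments])

# 2SLS by hand: project the mediators onto the instrument space, then regress the outcome
M = np.column_stack([m1, m2])
M_hat = Z @ np.linalg.lstsq(Z, M, rcond=None)[0]     # first-stage fitted values
X_hat = np.hstack([exog, M_hat])
beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
print("2SLS mediator effects:", beta[-2:])           # should land roughly near [0.8, 0.2]
```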
Peer reviewed
Kelcey, Ben; Phelps, Geoffrey; Jones, Nathan – Society for Research on Educational Effectiveness, 2013
Teacher professional development (PD) is seen as critical to improving the quality of US schools (National Commission on Teaching and America's Future, 1997). PD is increasingly viewed as one of the primary levers for improving teaching quality and, ultimately, student achievement (Correnti, 2007). One factor driving interest in PD is…
Descriptors: Faculty Development, Educational Quality, Teacher Effectiveness, Educational Research
Peer reviewed
Wing, Coady; Cook, Thomas D. – Journal of Policy Analysis and Management, 2013
The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of…
Descriptors: Regression (Statistics), Research Design, Statistical Analysis, Research Problems
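The "narrow subpopulation around the cutoff" point follows from how a sharp RDD is estimated: the treatment effect is the jump in the regression function at the cutoff, typically fit by local linear regression within a bandwidth. A minimal illustrative sketch with simulated data (bandwidth, effect size, and functional form are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical sharp RDD: treatment is assigned exactly at a cutoff of the running variable.
cutoff = 0.0
x = rng.uniform(-1, 1, n)                        # running variable
d = (x >= cutoff).astype(float)                  # sharp assignment rule
y = 0.25 * d + 1.5 * x + rng.normal(0, 0.5, n)   # assumed effect at the cutoff = 0.25

# Local linear estimation within a bandwidth around the cutoff:
# fit separate lines on each side and take the difference in intercepts at the cutoff.
h = 0.3
left = (x >= cutoff - h) & (x < cutoff)
right = (x >= cutoff) & (x <= cutoff + h)
b_left = np.polyfit(x[left] - cutoff, y[left], 1)
b_right = np.polyfit(x[right] - cutoff, y[right], 1)
rdd_effect = b_right[1] - b_left[1]              # np.polyfit returns [slope, intercept]
print(f"estimated discontinuity at the cutoff = {rdd_effect:.3f}")
```

Only observations near the cutoff contribute to the estimate, which is the source of the limited-generalizability critique the authors raise.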