Showing all 10 results
Peer reviewed
Andrew P. Jaciw – American Journal of Evaluation, 2025
By design, randomized experiments (XPs) rule out bias from confounded selection of participants into conditions. Quasi-experiments (QEs) are often considered second-best because they do not share this benefit. However, when results from XPs are used to generalize causal impacts, the benefit from unconfounded selection into conditions may be offset…
Descriptors: Elementary School Students, Elementary School Teachers, Generalization, Test Bias
Peer reviewed
Kenneth A. Frank; Qinyun Lin; Spiro J. Maroulis – Grantee Submission, 2024
In the complex world of educational policy, causal inferences will be debated. As we review non-experimental designs in educational policy, we focus on how to clarify and focus the terms of debate. We begin by presenting the potential outcomes/counterfactual framework and then describe approximations to the counterfactual generated from the…
Descriptors: Causal Models, Statistical Inference, Observation, Educational Policy
Pashley, Nicole E.; Miratrix, Luke W. – Journal of Educational and Behavioral Statistics, 2021
Evaluating blocked randomized experiments from a potential outcomes perspective has two primary branches of work. The first focuses on larger blocks, with multiple treatment and control units in each block. The second focuses on matched pairs, with a single treatment and control unit in each block. These literatures not only provide different…
Descriptors: Causal Models, Statistical Inference, Research Methodology, Computation
Peer reviewed
What Works Clearinghouse, 2022
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC)…
Descriptors: Program Evaluation, Program Effectiveness, Standards, Educational Research
Pashley, Nicole E.; Miratrix, Luke W. – Grantee Submission, 2019
In the causal inference literature, evaluating blocking from a potential outcomes perspective has two main branches of work. The first focuses on larger blocks, with multiple treatment and control units in each block. The second focuses on matched pairs, with a single treatment and control unit in each block. These literatures not only provide…
Descriptors: Causal Models, Statistical Inference, Research Methodology, Computation
Peer reviewed
Joyce, Kathryn E.; Cartwright, Nancy – American Educational Research Journal, 2020
This article addresses the gap between what works in research and what works in practice. Currently, research in evidence-based education policy and practice focuses on randomized controlled trials. These can support causal ascriptions ("It worked") but provide little basis for local effectiveness predictions ("It will work…
Descriptors: Theory Practice Relationship, Educational Policy, Evidence Based Practice, Educational Research
Peer reviewed
Hitchcock, John H.; Johnson, R. Burke; Schoonenboom, Judith – Research in the Schools, 2018
The central purpose of this article is to provide an overview of the many ways in which special educators can generate and think about causal inference to inform policy and practice. Consideration of causality across different lenses can be carried out by engaging in multiple method and mixed methods ways of thinking about inference. This article…
Descriptors: Causal Models, Statistical Inference, Special Education, Educational Research
Peer reviewed
Nicole Bohme Carnegie; Masataka Harada; Jennifer L. Hill – Journal of Research on Educational Effectiveness, 2016
A major obstacle to developing evidenced-based policy is the difficulty of implementing randomized experiments to answer all causal questions of interest. When using a nonexperimental study, it is critical to assess how much the results could be affected by unmeasured confounding. We present a set of graphical and numeric tools to explore the…
Descriptors: Randomized Controlled Trials, Simulation, Evidence Based Practice, Barriers
Peer reviewed
Cope, Bill; Kalantzis, Mary – Open Review of Educational Research, 2015
In this article, we argue that big data can offer new opportunities and roles for educational researchers. In the traditional model of evidence-gathering and interpretation in education, researchers are independent observers, who pre-emptively create instruments of measurement, and insert these into the educational process in specialized times and…
Descriptors: Data Collection, Data Interpretation, Evidence, Educational Research
Peer reviewed
Kelcey, Ben; Phelps, Geoffrey; Jones, Nathan – Society for Research on Educational Effectiveness, 2013
Teacher professional development (PD) is seen as critical to improving the quality of US schools (National Commission on Teaching and America's Future, 1997). PD is increasingly viewed as one of the primary levers for improving teaching quality and ultimately student achievement (Correnti, 2007). One factor that is driving interest in PD is…
Descriptors: Faculty Development, Educational Quality, Teacher Effectiveness, Educational Research