Showing all 13 results
A. Brooks Bowden – AERA Open, 2023
Although experimental evaluations have been labeled the "gold standard" of evidence for policy (U.S. Department of Education, 2003), evaluations without an analysis of costs are not sufficient for policymaking (Monk, 1995; Ross et al., 2007). Funding organizations now require cost-effectiveness data in most evaluations of effects. Yet,…
Descriptors: Cost Effectiveness, Program Evaluation, Economics, Educational Finance
Peer reviewed
Andrew P. Jaciw – American Journal of Evaluation, 2025
By design, randomized experiments (XPs) rule out bias from confounded selection of participants into conditions. Quasi-experiments (QEs) are often considered second-best because they do not share this benefit. However, when results from XPs are used to generalize causal impacts, the benefit from unconfounded selection into conditions may be offset…
Descriptors: Elementary School Students, Elementary School Teachers, Generalization, Test Bias
Sam Sims; Jake Anders; Matthew Inglis; Hugues Lortie-Forgues; Ben Styles; Ben Weidmann – Annenberg Institute for School Reform at Brown University, 2023
Over the last twenty years, education researchers have increasingly conducted randomised experiments with the goal of informing the decisions of educators and policymakers. Such experiments have generally employed broad, consequential, standardised outcome measures in the hope that this would allow decisionmakers to compare effectiveness of…
Descriptors: Educational Research, Research Methodology, Randomized Controlled Trials, Program Effectiveness
Peer reviewed
Taber, Keith S. – Studies in Science Education, 2019
Experimental studies are often employed to test the effectiveness of teaching innovations such as new pedagogy, curriculum, or learning resources. This article offers guidance on good practice in developing research designs, and in drawing conclusions from published reports. Random control trials potentially support the use of statistical…
Descriptors: Instructional Innovation, Educational Research, Research Design, Research Methodology
Peer reviewed
Joyce, Kathryn E.; Cartwright, Nancy – American Educational Research Journal, 2020
This article addresses the gap between what works in research and what works in practice. Currently, research in evidence-based education policy and practice focuses on randomized controlled trials. These can support causal ascriptions ("It worked") but provide little basis for local effectiveness predictions ("It will work…
Descriptors: Theory Practice Relationship, Educational Policy, Evidence Based Practice, Educational Research
Peer reviewed
Moerbeek, Mirjam; Safarkhani, Maryam – Journal of Educational and Behavioral Statistics, 2018
Data from cluster randomized trials do not always have a pure hierarchical structure. For instance, students are nested within schools that may be crossed by neighborhoods, and soldiers are nested within army units that may be crossed by mental health-care professionals. It is important that the random cross-classification is taken into account…
Descriptors: Randomized Controlled Trials, Classification, Research Methodology, Military Personnel
Peer reviewed; PDF on ERIC
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are WWC Study Review Guides, which are intended for use by WWC certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed; PDF on ERIC
What Works Clearinghouse, 2017
"Attrition" is the loss of sample during the course of a study. It occurs when individuals initially randomly assigned in a study are not included when researchers examine the outcome of interest. Attrition is a common issue in education research, and it occurs for many reasons. The What Works Clearinghouse (WWC) is an initiative of the…
Descriptors: Attrition (Research Studies), Control Groups, Experimental Groups, Randomized Controlled Trials
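The WWC's treatment of attrition rests on two quantities: overall attrition (the share of the randomized sample missing from the analysis) and differential attrition (the gap in attrition between the treatment and control groups). A minimal sketch of those two calculations, using hypothetical sample counts (the specific boundary values the WWC applies to them are not reproduced here):

```python
# Hypothetical illustration: overall and differential attrition in an RCT.
# Counts below are invented, not drawn from any WWC-reviewed study.

def attrition_rates(assigned_t, analyzed_t, assigned_c, analyzed_c):
    """Return (overall, differential) attrition as fractions of the randomized sample."""
    overall = 1 - (analyzed_t + analyzed_c) / (assigned_t + assigned_c)
    attrition_t = 1 - analyzed_t / assigned_t  # loss within the treatment group
    attrition_c = 1 - analyzed_c / assigned_c  # loss within the control group
    return overall, abs(attrition_t - attrition_c)

# 100 assigned per arm; 80 treatment and 90 control cases analyzed.
overall, differential = attrition_rates(100, 80, 100, 90)
```

Here 30 of 200 randomized cases are missing (overall attrition 0.15), and the groups lost sample at different rates (differential attrition 0.10), which is the pattern that raises bias concerns under the WWC standards.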
Peer reviewed
Hitchcock, John H.; Johnson, R. Burke; Schoonenboom, Judith – Research in the Schools, 2018
The central purpose of this article is to provide an overview of the many ways in which special educators can generate and think about causal inference to inform policy and practice. Consideration of causality across different lenses can be carried out by engaging in multiple method and mixed methods ways of thinking about inference. This article…
Descriptors: Causal Models, Statistical Inference, Special Education, Educational Research
Peer reviewed
Slavin, Robert E.; Cheung, Alan C. K. – Journal of Education for Students Placed at Risk, 2017
Large-scale randomized studies provide the best means of evaluating practical, replicable approaches to improving educational outcomes. This article discusses the advantages, problems, and pitfalls of these evaluations, focusing on alternative methods of randomization, recruitment, ensuring high-quality implementation, dealing with attrition, and…
Descriptors: Randomized Controlled Trials, Evaluation Methods, Recruitment, Attrition (Research Studies)
Peer reviewed; PDF on ERIC
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2017
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies. The impact estimators are derived using the building blocks of experimental designs with minimal assumptions, and have good statistical properties. The methods apply to randomized controlled trials (RCTs) and…
Descriptors: Design, Randomized Controlled Trials, Quasiexperimental Design, Research Methodology
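For the simplest RCT case, the design-based approach described here reduces to the Neyman difference-in-means estimator with a conservative variance estimate built from within-group sample variances. A minimal sketch under that reading, with invented outcome data (this is a generic textbook estimator, not code from the report itself):

```python
# Hypothetical illustration of a design-based impact estimate for a simple RCT:
# difference in group means, with conservative SE = sqrt(s_t^2/n_t + s_c^2/n_c).
from statistics import mean, variance  # variance() uses the n-1 denominator

def design_based_impact(y_treat, y_control):
    """Return (impact estimate, estimated standard error)."""
    impact = mean(y_treat) - mean(y_control)
    se = (variance(y_treat) / len(y_treat)
          + variance(y_control) / len(y_control)) ** 0.5
    return impact, se

# Invented outcomes for five students per arm.
y_t = [12, 15, 14, 16, 13]
y_c = [10, 11, 12, 9, 13]
impact, se = design_based_impact(y_t, y_c)
```

With these numbers the estimated impact is 3 with a standard error of 1; the appeal of the design-based framing is that this estimator needs only the randomization itself, not a model of the outcome.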
Peer reviewed
Reeves, Barnaby C.; Higgins, Julian P. T.; Ramsay, Craig; Shea, Beverley; Tugwell, Peter; Wells, George A. – Research Synthesis Methods, 2013
Background: Methods need to be further developed to include non-randomised studies (NRS) in systematic reviews of the effects of health care interventions. NRS are often required to answer questions about harms and interventions for which evidence from randomised controlled trials (RCTs) is not available. Methods used to review randomised…
Descriptors: Research Methodology, Research Design, Health Services, Workshops
Dijkers, Marcel P. J. M. – SEDL, 2011
This issue of "FOCUS" discusses external validity and what rehabilitation researchers can do to help practitioners answer the question "How far can we generalize this finding-- is it applicable to other clients/patients, with different characteristics, in dissimilar settings treated by other clinicians?," which clinicians and…
Descriptors: Rehabilitation, Generalization, Information Dissemination, Validity