Showing all 4 results
Peer reviewed
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook; Yanli Xie; Dung Pham; Peilin Qiu; Ning Sui – Grantee Submission, 2024
Multisite trials that randomize individuals (e.g., students) within sites (e.g., schools) or clusters (e.g., teachers/classrooms) within sites (e.g., schools) are commonly used for program evaluation because they provide opportunities to learn about treatment effects as well as their heterogeneity across sites and subgroups (defined by moderating…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Educational Research, Effect Size
Peer reviewed
Li, Wei; Dong, Nianbo; Maynard, Rebecca A. – Journal of Educational and Behavioral Statistics, 2020
Cost-effectiveness analysis is a widely used educational evaluation tool. The randomized controlled trials that aim to evaluate the cost-effectiveness of the treatment are commonly referred to as randomized cost-effectiveness trials (RCETs). This study provides methods of power analysis for two-level multisite RCETs. Power computations take…
Descriptors: Statistical Analysis, Cost Effectiveness, Randomized Controlled Trials, Educational Research
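The power-analysis methods referenced in the Li, Dong, and Maynard abstract are specific to randomized cost-effectiveness trials; as a point of reference only, the sketch below shows a standard power computation for the average treatment effect in a plain two-level multisite trial (individuals randomized within sites, with effect heterogeneity across sites). It is not the RCET-specific derivation from the article, and all parameter names and values are illustrative assumptions.

```python
# Illustrative sketch only: power for the standardized average treatment effect in a
# two-level multisite trial (individuals randomized within sites), following the
# standard random-effects formulation. Parameter values are hypothetical and this is
# not the cost-effectiveness (RCET) method from Li, Dong, & Maynard (2020).
from scipy import stats


def multisite_power(delta, n_sites, n_per_site, omega, alpha=0.05):
    """Two-sided power for the standardized average treatment effect.

    delta      : standardized effect size (mean difference / within-site SD)
    n_sites    : number of sites (J)
    n_per_site : individuals per site, assumed split evenly between arms (n)
    omega      : effect-size heterogeneity (variance of site-specific effects)
    """
    # Variance of the estimated average effect in standardized units: (omega + 4/n) / J
    var_effect = (omega + 4.0 / n_per_site) / n_sites
    noncentrality = delta / var_effect**0.5
    df = n_sites - 1                      # t-test based on site-level effect estimates
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Power = P(|T| > t_crit) under the noncentral t distribution
    return (1 - stats.nct.cdf(t_crit, df, noncentrality)
            + stats.nct.cdf(-t_crit, df, noncentrality))


if __name__ == "__main__":
    # Example: 20 sites, 60 students per site, effect size 0.25, modest heterogeneity
    print(round(multisite_power(delta=0.25, n_sites=20, n_per_site=60, omega=0.01), 3))
```

A full RCET power analysis would additionally model cost variability and the covariance between costs and effects, which this sketch omits.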
Peer reviewed
Cope, Bill; Kalantzis, Mary – Open Review of Educational Research, 2015
In this article, we argue that big data can offer new opportunities and roles for educational researchers. In the traditional model of evidence-gathering and interpretation in education, researchers are independent observers, who pre-emptively create instruments of measurement, and insert these into the educational process in specialized times and…
Descriptors: Data Collection, Data Interpretation, Evidence, Educational Research
Peer reviewed
PDF on ERIC
Cheung, Alan; Slavin, Robert – Society for Research on Educational Effectiveness, 2016
As evidence-based reform becomes increasingly important in educational policy, it is becoming essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. The purpose of this study was to examine how methodological features such as types of publication, sample sizes, and…
Descriptors: Effect Size, Evidence Based Practice, Educational Change, Educational Policy