Showing all 14 results
Peer reviewed
Direct link
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook; Yanli Xie; Dung Pham; Peilin Qiu; Ning Sui – Grantee Submission, 2024
Multisite trials that randomize individuals (e.g., students) within sites (e.g., schools) or clusters (e.g., teachers/classrooms) within sites (e.g., schools) are commonly used for program evaluation because they provide opportunities to learn about treatment effects as well as their heterogeneity across sites and subgroups (defined by moderating…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Educational Research, Effect Size
Peer reviewed
Direct link
Simpson, Adrian – Journal of Research on Educational Effectiveness, 2023
Evidence-based education aims to support policy makers choosing between potential interventions. This rarely involves considering each in isolation; instead, sets of evidence regarding many potential policy interventions are considered. Filtering a set on any quantity measured with error risks the "winner's curse": conditional on…
Descriptors: Effect Size, Educational Research, Evidence Based Practice, Foreign Countries
Peer reviewed
Direct link
Winnie Wing-Yee Tse; Hok Chio Lai – Society for Research on Educational Effectiveness, 2021
Background: Power analysis and sample size planning are key components in designing cluster randomized trials (CRTs), a common study design to test treatment effect by randomizing clusters or groups of individuals. Sample size determination in two-level CRTs requires knowledge of more than one design parameter, such as the effect size and the…
Descriptors: Sample Size, Bayesian Statistics, Randomized Controlled Trials, Research Design
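The sample-size problem the abstract above describes can be illustrated with a minimal sketch. This is not the authors' method or software; it is a standard normal-approximation power calculation for a balanced two-arm, two-level CRT, where `effect_size` (standardized treatment effect) and `icc` (intraclass correlation) are the design parameters the abstract refers to, and the function name is my own.

```python
from statistics import NormalDist

def crt_power(effect_size, icc, n_clusters, cluster_size, alpha=0.05):
    """Approximate power for a balanced two-level cluster randomized trial."""
    # Variance of the standardized treatment-effect estimate when J clusters
    # of size n are split evenly between arms: 4 * (icc + (1 - icc)/n) / J
    var = 4 * (icc + (1 - icc) / cluster_size) / n_clusters
    se = var ** 0.5
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    # Two-sided test; the lower rejection tail is negligible and ignored
    return z.cdf(effect_size / se - z_crit)

power = crt_power(effect_size=0.25, icc=0.10, n_clusters=40, cluster_size=20)
```

Note how power depends on more than one design parameter at once: with the ICC fixed, adding clusters helps far more than enlarging each cluster, which is the planning trade-off such papers formalize.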
Opper, Isaac M. – RAND Corporation, 2020
Researchers often include covariates when they analyze the results of randomized controlled trials (RCTs), valuing the increased precision of the estimates over the potential of inducing small-sample bias when doing so. In this paper, we develop a sufficient condition which ensures that the inclusion of covariates does not induce small-sample bias…
Descriptors: Artificial Intelligence, Man Machine Systems, Educational Technology, Technology Uses in Education
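The precision gain from covariate adjustment that the abstract above weighs against small-sample bias can be shown in a toy simulation. This is an illustration under my own assumptions (a single prognostic covariate with a known slope, for simplicity; in practice the slope is estimated by regression), not the paper's analysis.

```python
import random

random.seed(1)

def simulate_once(n=200, effect=0.5, slope=0.8):
    x = [random.gauss(0, 1) for _ in range(n)]       # baseline covariate
    t = [i % 2 for i in range(n)]                    # randomized treatment
    y = [effect * t[i] + slope * x[i] + random.gauss(0, 0.6) for i in range(n)]
    # Unadjusted estimator: simple difference in means
    y1 = [y[i] for i in range(n) if t[i]]
    y0 = [y[i] for i in range(n) if not t[i]]
    unadj = sum(y1) / len(y1) - sum(y0) / len(y0)
    # Adjusted estimator: remove the covariate's contribution first
    r = [y[i] - slope * x[i] for i in range(n)]
    r1 = [r[i] for i in range(n) if t[i]]
    r0 = [r[i] for i in range(n) if not t[i]]
    adj = sum(r1) / len(r1) - sum(r0) / len(r0)
    return unadj, adj

def variance(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / (len(v) - 1)

reps = [simulate_once() for _ in range(500)]
var_unadj = variance([u for u, _ in reps])
var_adj = variance([a for _, a in reps])
```

Across replications the adjusted estimator is markedly less variable, which is the precision gain researchers value when including covariates in RCT analyses.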
Peer reviewed
Direct link
Dong, Nianbo; Kelcey, Benjamin; Spybrook, Jessaca – Journal of Experimental Education, 2018
Researchers are often interested in whether the effects of an intervention differ conditional on individual- or group-moderator variables such as children's characteristics (e.g., gender), teacher's background (e.g., years of teaching), and school's characteristics (e.g., urbanity); that is, the researchers seek to examine for whom and under what…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Intervention, Effect Size
Peer reviewed
PDF on ERIC
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2016
The purpose of this study is to propose a general framework for power analyses to detect the moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval to…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Effect Size, Computation
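The minimum detectable effect size (MDES) mentioned above can be sketched under the usual normal approximation: it is the smallest standardized effect detectable with a stated power at a given significance level. The function below is my own illustration of that definition for a balanced two-level CRT, not the authors' formulation for moderator effects, which involves additional design parameters.

```python
from statistics import NormalDist

def crt_mdes(icc, n_clusters, cluster_size, alpha=0.05, power=0.80):
    """MDES for a balanced two-level CRT under the normal approximation."""
    z = NormalDist()
    # Multiplier combines the critical value and the target-power quantile
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    se = (4 * (icc + (1 - icc) / cluster_size) / n_clusters) ** 0.5
    return multiplier * se

mdes = crt_mdes(icc=0.10, n_clusters=40, cluster_size=20)
```

Reading the result in reverse gives the connection to power analysis: any true effect at least as large as the MDES would be detected with at least the target power in this design.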
Peer reviewed
PDF on ERIC
What Works Clearinghouse, 2020
The What Works Clearinghouse (WWC) is an initiative of the U.S. Department of Education's Institute of Education Sciences (IES), which was established under the Education Sciences Reform Act of 2002. It is an important part of IES's strategy to use rigorous and relevant research, evaluation, and statistics to improve the nation's education system.…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Peer reviewed
Direct link
Li, Wei; Konstantopoulos, Spyros – Educational and Psychological Measurement, 2017
Field experiments in education frequently assign entire groups, such as schools, to treatment or control conditions. These experiments sometimes incorporate a longitudinal component where, for example, students are followed over time to assess differences in the average rate of linear change or the rate of acceleration. In this study, we provide methods…
Descriptors: Educational Experiments, Field Studies, Models, Randomized Controlled Trials
Westlund, Erik; Stuart, Elizabeth A. – American Journal of Evaluation, 2017
This article discusses the nonuse, misuse, and proper use of pilot studies in experimental evaluation research. The authors first show that there is little theoretical, practical, or empirical guidance available to researchers who seek to incorporate pilot studies into experimental evaluation research designs. The authors then discuss how pilot…
Descriptors: Use Studies, Pilot Projects, Evaluation Research, Experiments
E. C. Hedberg – Grantee Submission, 2016
Background: There is an increased focus on randomized trials for proximal behavioral outcomes in early childhood research. However, planning sample sizes for such designs requires extant information on the size of effect, variance decomposition, and effectiveness of covariates. Objectives: The purpose of this article is to employ a recent large…
Descriptors: Randomized Controlled Trials, Kindergarten, Children, Longitudinal Studies
Peer reviewed
PDF on ERIC
What Works Clearinghouse, 2017
The What Works Clearinghouse (WWC) systematic review process is the basis of many of its products, enabling the WWC to use consistent, objective, and transparent standards and procedures in its reviews, while also ensuring comprehensive coverage of the relevant literature. The WWC systematic review process consists of five steps: (1) Developing…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Peer reviewed
PDF on ERIC
Dong, Nianbo – Society for Research on Educational Effectiveness, 2014
For intervention studies involving binary treatment variables, procedures for power analysis have been worked out and computerized estimation tools are generally available. The purpose of this study is to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval,…
Descriptors: Cluster Grouping, Randomized Controlled Trials, Statistical Analysis, Computation
Peer reviewed
PDF on ERIC
Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2014
Cluster randomized trials (CRTs), or studies in which intact groups of individuals are randomly assigned to a condition, are becoming more common in the evaluation of educational programs, policies, and practices. The website for the National Center for Education Evaluation and Regional Assistance (NCEE) reveals that it has launched over 30…
Descriptors: Cluster Grouping, Randomized Controlled Trials, Statistical Analysis, Computation
Peer reviewed
PDF on ERIC
What Works Clearinghouse, 2011
With its critical assessments of scientific evidence on the effectiveness of education programs, policies, and practices (referred to as "interventions"), and a range of products summarizing this evidence, the What Works Clearinghouse (WWC) is an important part of the Institute of Education Sciences' strategy to use rigorous and relevant…
Descriptors: Standards, Access to Information, Information Management, Guides