Publication Date
In 2025 | 0 |
Since 2024 | 3 |
Since 2021 (last 5 years) | 11 |
Since 2016 (last 10 years) | 19 |
Since 2006 (last 20 years) | 35 |
Source
Society for Research on Educational Effectiveness | 35 |
Author
Kelcey, Ben | 3 |
Ben Kelcey | 2 |
Deke, John | 2 |
Hedberg, E. C. | 2 |
Heppen, Jessica | 2 |
Jones, Nathan | 2 |
Peter Schochet | 2 |
Phelps, Geoffrey | 2 |
Spybrook, Jessaca | 2 |
Steiner, Peter M. | 2 |
Alan Ruby | 1 |
Publication Type
Reports - Research | 32 |
Numerical/Quantitative Data | 2 |
Reports - Evaluative | 2 |
Information Analyses | 1 |
Education Level
Secondary Education | 11 |
Elementary Education | 9 |
Middle Schools | 7 |
High Schools | 6 |
Junior High Schools | 6 |
Intermediate Grades | 4 |
Early Childhood Education | 3 |
Grade 4 | 3 |
Grade 5 | 3 |
Elementary Secondary Education | 2 |
Grade 3 | 2 |
Audience
Researchers | 2 |
Policymakers | 1 |
Location
Illinois | 4 |
Massachusetts | 2 |
Texas | 2 |
Arizona | 1 |
Arkansas | 1 |
Cambodia | 1 |
Colorado | 1 |
Kentucky | 1 |
Louisiana | 1 |
Minnesota | 1 |
North Carolina | 1 |
Assessments and Surveys
Massachusetts Comprehensive… | 1 |
Ben Kelcey; Fangxing Bai; Amota Ataneka; Yanli Xie; Kyle Cox – Society for Research on Educational Effectiveness, 2024
We consider a class of multiple-group individually-randomized group trials (IRGTs) that introduces a (partially) cross-classified structure in the treatment condition (only). The novel feature of this design is that the nature of the treatment induces a clustering structure that involves two or more non-nested groups among individuals in the…
Descriptors: Randomized Controlled Trials, Research Design, Statistical Analysis, Error of Measurement
Joseph Taylor; Dung Pham; Paige Whitney; Jonathan Hood; Lamech Mbise; Qi Zhang; Jessaca Spybrook – Society for Research on Educational Effectiveness, 2023
Background: Power analyses for a cluster-randomized trial (CRT) require estimates of additional design parameters beyond those needed for an individually randomized trial. In a 2-level CRT, there are two sample sizes, the number of clusters and the number of individuals per cluster. The intraclass correlation (ICC), or the proportion of variance…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
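For orientation on the design parameters this abstract names (number of clusters, cluster size, and the ICC), a minimal power sketch for a balanced two-level CRT, using the standard textbook approximation rather than the authors' own calculations; every numeric value below is an assumption for illustration:

    # Illustrative two-level CRT power sketch (standard approximation, not code
    # from Taylor et al.); all design values below are assumed.
    import math
    from scipy import stats

    J = 40        # number of clusters (assumed)
    n = 25        # individuals per cluster (assumed)
    icc = 0.15    # intraclass correlation: between-cluster share of total variance (assumed)
    delta = 0.25  # standardized treatment effect (assumed)

    # Standard error of the standardized effect in a balanced 2-level CRT with no covariates
    se = math.sqrt(4 * (icc + (1 - icc) / n) / J)
    df = J - 2                          # cluster-level degrees of freedom
    t_crit = stats.t.ppf(0.975, df)     # two-sided alpha = 0.05
    power = 1 - stats.nct.cdf(t_crit, df, delta / se) + stats.nct.cdf(-t_crit, df, delta / se)
    print(f"SE = {se:.3f}, power ≈ {power:.2f}")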
Zuchao Shen; Ben Kelcey – Society for Research on Educational Effectiveness, 2023
I. Purpose of the Study: Detecting whether interventions work or not (through main effect analysis) can provide empirical evidence regarding the causal linkage between malleable factors (e.g., interventions) and learner outcomes. As a complement, moderation analyses help delineate for whom and under what conditions intervention effects are most…
Descriptors: Intervention, Program Effectiveness, Evidence, Research Design
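To make the contrast between main-effect and moderation analyses concrete, a minimal regression sketch with a treatment-by-moderator interaction; this is a generic illustration, not the authors' model, and the variable names and simulated data are assumptions:

    # Generic moderation sketch: the interaction term captures "for whom" the
    # intervention works. Data and variable names are invented for illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "treat": rng.integers(0, 2, n),    # randomized treatment indicator
        "baseline": rng.normal(size=n),    # candidate moderator (e.g., prior achievement)
    })
    # Outcome with an assumed main effect (0.3) and moderated effect (0.2 * baseline)
    df["y"] = 0.3 * df["treat"] + 0.2 * df["treat"] * df["baseline"] + rng.normal(size=n)

    model = smf.ols("y ~ treat * baseline", data=df).fit()
    print(model.params[["treat", "treat:baseline"]])   # main effect and moderator effect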
Ishita Ahmed; Masha Bertling; Lijin Zhang; Andrew Ho; Prashant Loyalka; Scott Rozelle; Ben Domingue – Society for Research on Educational Effectiveness, 2023
Background: Evidence from education randomized controlled trials (RCTs) in low- and middle-income countries (LMICs) demonstrates how interventions can improve children's educational achievement [1, 2, 3, 4]. RCTs assess the impact of an intervention by comparing outcomes--aggregate test scores--between treatment and control groups. A review of…
Descriptors: Randomized Controlled Trials, Educational Research, Outcome Measures, Research Design
Timothy Lycurgus; Daniel Almirall – Society for Research on Educational Effectiveness, 2024
Background: Education scientists are increasingly interested in constructing interventions that are adaptive over time to suit the evolving needs of students, classrooms, or schools. Such "adaptive interventions" (also referred to as dynamic treatment regimens or dynamic instructional regimes) determine which treatment should be offered…
Descriptors: Educational Research, Research Design, Randomized Controlled Trials, Intervention
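A toy sketch of the kind of decision rule an adaptive intervention encodes; the stages, measures, and treatment options below are purely illustrative assumptions, not taken from the paper:

    # Toy two-stage adaptive intervention rule: continue for responders,
    # intensify for non-responders. All labels and the threshold are assumed.
    def second_stage_offer(first_stage: str, interim_score: float, threshold: float = 0.0) -> str:
        """Return the stage-2 offer given interim response to the stage-1 treatment."""
        if interim_score >= threshold:
            return f"continue {first_stage}"              # responder: stay the course
        return f"augment {first_stage} with tutoring"     # non-responder: intensify (assumed option)

    print(second_stage_offer("small-group instruction", interim_score=-0.4))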
Peter Schochet – Society for Research on Educational Effectiveness, 2024
Random encouragement designs are randomized controlled trials (RCTs) that test interventions aimed at increasing participation in a program or activity whose take-up is not universal. In these RCTs, instead of randomizing individuals or clusters directly into treatment and control groups to participate in a program or activity, the randomization…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
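Because take-up is not universal in an encouragement design, effects on actual participants are often recovered by scaling the intent-to-treat effect by the take-up difference (a Wald-style instrumental-variables adjustment). A minimal sketch of that generic estimator, not necessarily the paper's method, with assumed numbers:

    # Wald / IV sketch for an encouragement design. All numbers are assumed.
    itt_effect = 0.06          # outcome difference, encouraged vs. not encouraged (assumed)
    takeup_encouraged = 0.55   # share participating when encouraged (assumed)
    takeup_control = 0.20      # share participating without encouragement (assumed)

    complier_share = takeup_encouraged - takeup_control
    late = itt_effect / complier_share   # local average treatment effect for compliers
    print(f"LATE ≈ {late:.3f} given a complier share of {complier_share:.2f}")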
Winnie Wing-Yee Tse; Hok Chio Lai – Society for Research on Educational Effectiveness, 2021
Background: Power analysis and sample size planning are key components in designing cluster randomized trials (CRTs), a common study design to test treatment effect by randomizing clusters or groups of individuals. Sample size determination in two-level CRTs requires knowledge of more than one design parameter, such as the effect size and the…
Descriptors: Sample Size, Bayesian Statistics, Randomized Controlled Trials, Research Design
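One way to reflect uncertainty in design parameters such as the ICC is to average power over a prior rather than plug in a single value. A minimal sketch of that general idea, not the authors' implementation; the prior and all design values are assumptions:

    # Propagate ICC uncertainty through a standard 2-level CRT power formula by
    # drawing ICC values from an assumed Beta prior and summarizing the power draws.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    J, n, delta = 30, 20, 0.3                # clusters, cluster size, effect size (assumed)
    icc_draws = rng.beta(2, 18, size=5000)   # assumed prior, centered near ICC = 0.10

    se = np.sqrt(4 * (icc_draws + (1 - icc_draws) / n) / J)
    t_crit = stats.t.ppf(0.975, J - 2)
    power = 1 - stats.nct.cdf(t_crit, J - 2, delta / se)   # upper tail only; lower tail is negligible
    print(f"expected power ≈ {power.mean():.2f}, 10th percentile ≈ {np.quantile(power, 0.1):.2f}")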
Peter Schochet – Society for Research on Educational Effectiveness, 2021
Background: When RCTs are not feasible and time series data are available, panel data methods can be used to estimate treatment effects on outcomes, by exploiting variation in policies and conditions over time and across locations. A complication with these methods, however, is that treatment timing often varies across the sample, for example, due…
Descriptors: Statistical Analysis, Computation, Randomized Controlled Trials, COVID-19
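A common baseline estimator in this setting is a two-way fixed effects regression on a unit-by-period panel. A minimal sketch of that generic estimator only; as the abstract notes, staggered treatment timing complicates matters, and this simple specification can be biased when effects vary. All data below are simulated assumptions:

    # Two-way fixed effects (TWFE) sketch on a simulated unit x period panel with
    # staggered adoption. Generic illustration; not the paper's estimator.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    units, periods = 40, 8
    adopt = rng.integers(3, periods + 1, size=units)   # adoption period per unit (some never adopt)
    rows = []
    for u in range(units):
        for t in range(periods):
            treated = int(t >= adopt[u])
            y = 0.5 * u / units + 0.1 * t + 0.4 * treated + rng.normal(scale=0.5)
            rows.append({"unit": u, "period": t, "treated": treated, "y": y})
    panel = pd.DataFrame(rows)

    twfe = smf.ols("y ~ treated + C(unit) + C(period)", data=panel).fit()
    print(twfe.params["treated"])   # naive TWFE estimate of the treatment effect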
Kelly Hallberg; Andrew Swanlund; Ryan Williams – Society for Research on Educational Effectiveness, 2021
Background: The COVID-19 pandemic and the subsequent public health response led to an unprecedented disruption in educational instruction in the U.S. and around the world. Many schools quickly moved to virtual learning for the bulk of the 2020 spring term and many states cancelled annual assessments of student learning. The 2020-21 school year…
Descriptors: Research Problems, Educational Research, Research Design, Randomized Controlled Trials
Kristin Porter; Luke Miratrix; Kristen Hunter – Society for Research on Educational Effectiveness, 2021
Background: Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs)…
Descriptors: Statistical Analysis, Hypothesis Testing, Computer Software, Randomized Controlled Trials
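For reference, applying a multiple testing procedure to a family of p-values takes one call with standard tooling; a minimal sketch of generic usage, not the authors' package, and the p-values are invented:

    # Adjust a family of p-values for multiplicity (Holm's procedure here);
    # the p-values are invented purely to illustrate the call.
    from statsmodels.stats.multitest import multipletests

    pvals = [0.004, 0.020, 0.035, 0.047, 0.210]
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
    print(list(zip(pvals, p_adj.round(3), reject)))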
Bowden, A. Brooks – Society for Research on Educational Effectiveness, 2017
Initiatives during the Bush Administration and the Obama Administration may have set the stage for a "Golden Age of evidence-based policy" (Haskins, 2015). Together, these efforts stress the importance of accurate, internally valid evidence that can inform decisions to more efficiently allocate public resources. In 2002, the U.S.…
Descriptors: Research Design, Costs, Randomized Controlled Trials, Educational Research
Claire Allen-Platt; Clara-Christina Gerstner; Robert Boruch; Alan Ruby – Society for Research on Educational Effectiveness, 2021
Background/Context: When a researcher tests an educational program, product, or policy in a randomized controlled trial (RCT) and detects a significant effect on an outcome, the intervention is usually classified as something that "works." When the expected effects are not found, however, there is seldom an orderly and transparent…
Descriptors: Educational Assessment, Randomized Controlled Trials, Evidence, Educational Research
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2017
The purpose of this paper is to present results of recent advances in power analyses to detect moderator effects in Cluster Randomized Trials (CRTs). The paper focuses on demonstrating the software PowerUp!-Moderator and provides a resource for researchers seeking to design CRTs with adequate power to detect the moderator effects of…
Descriptors: Computer Software, Research Design, Randomized Controlled Trials, Statistical Analysis
Deke, John; Wei, Thomas; Kautz, Tim – Society for Research on Educational Effectiveness, 2018
Evaluators of education interventions increasingly need to design studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." For example, an evaluation of Response to Intervention from the Institute of Education Sciences (IES) detected impacts ranging from 0.13 to 0.17 standard…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
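To give a sense of scale for targets well below 0.20 standard deviations, a minimal sketch of the standard per-arm sample-size approximation for an individually randomized two-arm trial; a textbook formula, not the authors' calculation, with assumed power and alpha:

    # Approximate per-arm sample size to detect a standardized effect delta with
    # 80% power at two-sided alpha = 0.05 (values assumed for illustration).
    from scipy import stats

    alpha, target_power = 0.05, 0.80
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(target_power)
    for delta in (0.20, 0.15, 0.10):
        n_per_arm = 2 * (z / delta) ** 2
        print(f"delta = {delta:.2f}: about {n_per_arm:.0f} per arm")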
Hedberg, E. C.; Hedges, L. V.; Kuyper, A. M. – Society for Research on Educational Effectiveness, 2015
Randomized experiments are generally considered to provide the strongest basis for inferences about cause and effect. Consequently, randomized field trials have been increasingly used to evaluate the effects of education interventions, products, and services. Populations of interest in education are often hierarchically structured (such as…
Descriptors: Randomized Controlled Trials, Hierarchical Linear Modeling, Correlation, Computation
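The practical consequence of clustering that this line of work quantifies is often summarized as a design effect; a minimal sketch with the standard formula and assumed values, not figures drawn from the paper:

    # Design effect sketch: how much clustering inflates the variance of a mean
    # relative to a simple random sample of the same size. All values are assumed.
    icc = 0.20        # intraclass correlation (assumed; varies by outcome and grade)
    m = 25            # average cluster size, e.g., students per school (assumed)
    n_total = 2000    # total students sampled (assumed)

    deff = 1 + (m - 1) * icc
    effective_n = n_total / deff
    print(f"design effect = {deff:.1f}, effective sample size ≈ {effective_n:.0f}")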