Showing 1 to 15 of 45 results
Peer reviewed
Direct link
Peter Z. Schochet – Journal of Educational and Behavioral Statistics, 2025
Random encouragement designs evaluate treatments that aim to increase participation in a program or activity. These randomized controlled trials (RCTs) can also assess the mediated effects of participation itself on longer term outcomes using a complier average causal effect (CACE) estimation framework. This article considers power analysis…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
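The power logic this entry points to can be illustrated with the standard Bloom-style adjustment that links an intent-to-treat minimum detectable effect size to its complier average causal effect counterpart. The sketch below is a generic approximation, not the article's procedure; the sample size and compliance rate are invented.

```python
# Minimal sketch (not the article's procedure): the usual Bloom-style link between
# an ITT-scale minimum detectable effect size and its CACE counterpart in a
# random encouragement design, MDES_CACE = MDES_ITT / compliance_rate.
from statsmodels.stats.power import TTestIndPower

def cace_mdes(n_per_arm, compliance_rate, alpha=0.05, power=0.80):
    """Approximate the CACE-scale MDES for a two-arm encouragement design."""
    # ITT-scale MDES for a simple individually randomized two-arm comparison.
    mdes_itt = TTestIndPower().solve_power(
        effect_size=None, nobs1=n_per_arm, ratio=1.0, alpha=alpha, power=power
    )
    # Dividing by the share of compliers inflates the effect that participation
    # itself must have for the encouragement contrast to be detectable.
    return mdes_itt / compliance_rate

print(cace_mdes(n_per_arm=500, compliance_rate=0.4))  # roughly 0.44 SD
```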
Peer reviewed
Direct link
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook – Journal of Experimental Education, 2024
Multisite cluster randomized trials (MCRTs), in which the intermediate-level clusters (e.g., classrooms) are randomly assigned to the treatment or control condition within each site (e.g., school), are among the most commonly used experimental designs across a broad range of disciplines. MCRTs often align with the theory that programs are…
Descriptors: Research Design, Randomized Controlled Trials, Statistical Analysis, Sample Size
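As a rough companion to this entry, the sketch below approximates power for a clustered comparison using the usual design effect 1 + (m − 1)·ICC. It collapses the multisite structure to two levels, so it is only a back-of-the-envelope stand-in for the MCRT formulas the article develops, and all inputs are illustrative.

```python
# Simplified sketch: approximate power for a cluster randomized comparison by
# deflating the sample size with the design effect 1 + (m - 1) * ICC.
# This two-level shortcut ignores the site (blocking) level that MCRT power
# formulas model explicitly, so treat it only as a back-of-the-envelope check.
from statsmodels.stats.power import TTestIndPower

def crt_power(effect_size, n_clusters_per_arm, cluster_size, icc, alpha=0.05):
    deff = 1 + (cluster_size - 1) * icc                 # design effect
    n_eff = n_clusters_per_arm * cluster_size / deff    # effective n per arm
    return TTestIndPower().power(effect_size=effect_size, nobs1=n_eff,
                                 ratio=1.0, alpha=alpha)

print(crt_power(effect_size=0.25, n_clusters_per_arm=30,
                cluster_size=25, icc=0.15))
```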
Peer reviewed
Direct link
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook; Yanli Xie; Dung Pham; Peilin Qiu; Ning Sui – Grantee Submission, 2024
Multisite trials that randomize individuals (e.g., students) within sites (e.g., schools) or clusters (e.g., teachers/classrooms) within sites (e.g., schools) are commonly used for program evaluation because they provide opportunities to learn about treatment effects as well as their heterogeneity across sites and subgroups (defined by moderating…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Educational Research, Effect Size
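A model of the kind such multisite heterogeneity analyses rely on can be sketched with a random treatment slope: the slope variance summarizes how the treatment effect varies across sites. The data frame and column names below (outcome, treat, site) are assumptions for illustration, not the authors' specification.

```python
# Illustrative sketch (column names are assumed): a multisite model with a
# random treatment slope across sites.
import pandas as pd
import statsmodels.formula.api as smf

def fit_multisite_model(df: pd.DataFrame):
    """df needs columns: outcome, treat (0/1), site (site identifier)."""
    model = smf.mixedlm(
        "outcome ~ treat",          # fixed effect: average treatment effect
        data=df,
        groups=df["site"],          # sites as the grouping factor
        re_formula="~treat",        # random intercept and random treatment slope
    )
    result = model.fit(reml=True)
    print(result.summary())         # cov_re reports the cross-site slope variance
    return result
```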
Peer reviewed
Direct link
Peter Schochet – Society for Research on Educational Effectiveness, 2024
Random encouragement designs are randomized controlled trials (RCTs) that test interventions aimed at increasing participation in a program or activity whose take up is not universal. In these RCTs, instead of randomizing individuals or clusters directly into treatment and control groups to participate in a program or activity, the randomization…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
Yanli Xie – ProQuest LLC, 2022
The purpose of this dissertation is to develop principles and strategies for and identify limitations of multisite cluster randomized trials in the context of partially and fully nested designs. In the first study, I develop principles of estimation, sampling variability, and inference for studies that leverage multisite designs within the context…
Descriptors: Randomized Controlled Trials, Research Design, Computation, Sampling
Peer reviewed
Direct link
Riley, Richard D.; Collins, Gary S.; Hattle, Miriam; Whittle, Rebecca; Ensor, Joie – Research Synthesis Methods, 2023
Before embarking on an individual participant data meta-analysis (IPDMA) project, researchers should consider the power of their planned IPDMA conditional on the studies promising their IPD and their characteristics. Such power estimates help inform whether the IPDMA project is worth the time and funding investment, before IPD are collected. Here,…
Descriptors: Computation, Meta Analysis, Participant Characteristics, Data
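One generic way to frame such a calculation is a fixed-effect, z-test approximation for an IPD meta-analysis of a mean difference that pools inverse-variance information across the promised studies. The sketch below is not the authors' method, and the study sizes and standard deviations are made up.

```python
# Rough sketch (not the authors' method): fixed-effect power for an IPD
# meta-analysis of a mean difference, pooling information across the studies
# that have promised their data.  Study sizes and SDs below are invented.
import numpy as np
from scipy.stats import norm

def ipdma_power(delta, per_arm_ns, sds, alpha=0.05):
    """delta: assumed true mean difference; per_arm_ns/sds: one entry per study."""
    # Variance of the mean difference contributed by each study (two equal arms).
    study_vars = [2 * sd**2 / n for n, sd in zip(per_arm_ns, sds)]
    pooled_se = np.sqrt(1 / sum(1 / v for v in study_vars))  # inverse-variance pooling
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta / pooled_se - z_crit)

print(ipdma_power(delta=0.2, per_arm_ns=[60, 120, 200], sds=[1.0, 1.1, 0.9]))
```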
Peer reviewed
Direct link
Li, Wei; Konstantopoulos, Spyros – Educational and Psychological Measurement, 2023
Cluster randomized control trials often incorporate a longitudinal component where, for example, students are followed over time and student outcomes are measured repeatedly. Besides examining how intervention effects induce changes in outcomes, researchers are sometimes also interested in exploring whether intervention effects on outcomes are…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Longitudinal Studies, Hierarchical Linear Modeling
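A growth-model sketch of the idea, with a wave-by-treatment interaction capturing whether the effect changes over repeated measurements, is given below. The column names (score, wave, treat, school) are assumptions, and nesting repeated measures only within schools is a simplification of the models the article analyzes.

```python
# Illustrative sketch (columns assumed; a fuller model would also nest repeated
# measures within students): does the treatment effect grow across the waves of
# a longitudinal cluster randomized trial?
import pandas as pd
import statsmodels.formula.api as smf

def fit_growth_model(df: pd.DataFrame):
    """df needs columns: score, wave (0, 1, 2, ...), treat (0/1), school."""
    model = smf.mixedlm(
        "score ~ wave * treat",     # wave:treat is the change in effect per wave
        data=df,
        groups=df["school"],        # random intercepts and time slopes by school
        re_formula="~wave",
    )
    return model.fit(reml=True)
```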
Peer reviewed
Direct link
Li, Wei; Dong, Nianbo; Maynard, Rebecca; Spybrook, Jessaca; Kelcey, Ben – Journal of Research on Educational Effectiveness, 2023
Cluster randomized trials (CRTs) are commonly used to evaluate educational interventions, particularly their effectiveness. Recently there has been greater emphasis on using these trials to explore cost-effectiveness. However, methods for establishing the power of cluster randomized cost-effectiveness trials (CRCETs) are limited. This study…
Descriptors: Research Design, Statistical Analysis, Randomized Controlled Trials, Cost Effectiveness
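For orientation, the quantity such power calculations typically target can be written as a net monetary benefit regression with cluster-robust standard errors, as sketched below. This is not the article's power formula, and the column names and willingness-to-pay value are illustrative.

```python
# Hedged sketch (not the article's power formulas): a net monetary benefit
# regression with cluster-robust standard errors.  Column names and the
# willingness-to-pay value are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

def net_benefit_effect(df: pd.DataFrame, wtp: float = 1000.0):
    """df needs columns: effect, cost, treat (0/1), school."""
    df = df.assign(net_benefit=wtp * df["effect"] - df["cost"])
    fit = smf.ols("net_benefit ~ treat", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["school"]}
    )
    return fit.params["treat"], fit.bse["treat"]
```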
Peer reviewed
Direct link
Simpson, Adrian – Journal of Research on Educational Effectiveness, 2023
Evidence-based education aims to support policy makers choosing between potential interventions. This rarely involves considering each in isolation; instead, sets of evidence regarding many potential policy interventions are considered. Filtering a set on any quantity measured with error risks the "winner's curse": conditional on…
Descriptors: Effect Size, Educational Research, Evidence Based Practice, Foreign Countries
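The phenomenon is easy to reproduce in a small simulation: generate noisy estimates for a set of interventions, keep the one with the largest estimate, and that estimate overstates its true effect on average. The numbers in the sketch below are arbitrary.

```python
# Small simulation of the "winner's curse" the abstract describes.
import numpy as np

rng = np.random.default_rng(0)
n_sets, n_interventions, noise_sd = 10_000, 20, 0.10

true_effects = rng.normal(0.05, 0.05, size=(n_sets, n_interventions))
estimates = true_effects + rng.normal(0.0, noise_sd, size=true_effects.shape)

winners = estimates.argmax(axis=1)                  # filter on the estimated effect
rows = np.arange(n_sets)
bias = (estimates[rows, winners] - true_effects[rows, winners]).mean()
print(f"average overstatement for the 'winning' intervention: {bias:.3f} SD")
```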
Benjamin Lu; Eli Ben-Michael; Avi Feller; Luke Miratrix – Journal of Educational and Behavioral Statistics, 2023
In multisite trials, learning about treatment effect variation across sites is critical for understanding where and for whom a program works. Unadjusted comparisons, however, capture "compositional" differences in the distributions of unit-level features as well as "contextual" differences in site-level features, including…
Descriptors: Statistical Analysis, Statistical Distributions, Program Implementation, Comparative Analysis
Peer reviewed
Direct link
Peter Schochet – Society for Research on Educational Effectiveness, 2021
Background: When RCTs are not feasible and time series data are available, panel data methods can be used to estimate treatment effects on outcomes, by exploiting variation in policies and conditions over time and across locations. A complication with these methods, however, is that treatment timing often varies across the sample, for example, due…
Descriptors: Statistical Analysis, Computation, Randomized Controlled Trials, COVID-19
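The baseline estimator this literature starts from is the two-way fixed-effects panel regression sketched below; the column names are assumptions, and the comment notes the staggered-timing complication the abstract raises rather than the paper's own correction.

```python
# Baseline sketch only: the two-way fixed-effects panel regression that the
# staggered-timing literature starts from (and then corrects).  Columns are
# assumed: outcome, treated (1 once a unit has adopted), unit, year.
import pandas as pd
import statsmodels.formula.api as smf

def twfe_did(df: pd.DataFrame):
    fit = smf.ols("outcome ~ treated + C(unit) + C(year)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["unit"]}
    )
    # With staggered adoption and heterogeneous effects this coefficient can be
    # a poorly weighted average of period-specific effects -- the complication
    # the abstract points to.
    return fit.params["treated"]
```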
Peer reviewed
Direct link
Miratrix, Luke W.; Weiss, Michael J.; Henderson, Brit – Journal of Research on Educational Effectiveness, 2021
Researchers face many choices when conducting large-scale multisite individually randomized control trials. One of the most common quantities of interest in multisite RCTs is the overall average effect. Even this quantity is non-trivial to define and estimate. The researcher can target the average effect across individuals or sites. Furthermore,…
Descriptors: Computation, Randomized Controlled Trials, Error of Measurement, Regression (Statistics)
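The individual-versus-site distinction can be made concrete in a few lines: estimate a per-site effect, then average either equally across sites or weighted by site size. The column names below are assumed.

```python
# Quick sketch of the distinction the abstract raises: averaging site-level
# effects equally (site average) versus weighting by site size (person average).
import pandas as pd

def site_vs_person_average(df: pd.DataFrame):
    """df needs columns: outcome, treat (0/1), site."""
    per_site = df.groupby("site").apply(
        lambda g: g.loc[g.treat == 1, "outcome"].mean()
                  - g.loc[g.treat == 0, "outcome"].mean()
    )
    sizes = df.groupby("site").size()
    site_avg = per_site.mean()                           # every site counts equally
    person_avg = (per_site * sizes).sum() / sizes.sum()  # larger sites count more
    return site_avg, person_avg
```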
Peer reviewed
Direct link
Kristin Porter; Luke Miratrix; Kristen Hunter – Society for Research on Educational Effectiveness, 2021
Background: Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs)…
Descriptors: Statistical Analysis, Hypothesis Testing, Computer Software, Randomized Controlled Trials
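Applying such a procedure to a vector of p-values is straightforward; the sketch below uses a Benjamini-Hochberg adjustment on invented p-values as one example of an MTP, without implying it is the specific procedure the authors evaluate.

```python
# Minimal sketch of applying a multiple testing procedure to p-values from
# many outcome-by-subgroup tests (the p-values below are invented).
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.034, 0.21, 0.48]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(list(zip(p_values, p_adjusted, reject)))  # Benjamini-Hochberg adjusted
```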
Benjamin Lu; Eli Ben-Michael; Avi Feller; Luke Miratrix – Grantee Submission, 2022
In multisite trials, learning about treatment effect variation across sites is critical for understanding where and for whom a program works. Unadjusted comparisons, however, capture "compositional" differences in the distributions of unit-level features as well as "contextual" differences in site-level features, including…
Descriptors: Statistical Analysis, Statistical Distributions, Program Implementation, Comparative Analysis
Peer reviewed
PDF on ERIC Download full text
Huang, Francis L. – Practical Assessment, Research & Evaluation, 2018
Among econometricians, instrumental variable (IV) estimation is a commonly used technique to estimate the causal effect of a particular variable on a specified outcome. However, among applied researchers in the social sciences, IV estimation may not be well understood. Although there are several IV estimation primers from different fields, most…
Descriptors: Computation, Statistical Analysis, Compliance (Psychology), Randomized Controlled Trials
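A hand-rolled two-stage least squares sketch of the encouragement-style setup such primers cover is shown below: randomized assignment instruments actual participation. The column names are assumptions, and a dedicated IV routine would be used in practice to obtain correct standard errors.

```python
# Hand-rolled two-stage least squares sketch: randomized assignment serves as
# the instrument for actual participation.  Column names (outcome,
# participated, assigned) are assumed.
import pandas as pd
import statsmodels.formula.api as smf

def two_stage_ls(df: pd.DataFrame):
    # Stage 1: predict participation from the randomized instrument.
    stage1 = smf.ols("participated ~ assigned", data=df).fit()
    df = df.assign(participated_hat=stage1.fittedvalues)
    # Stage 2: regress the outcome on predicted participation.
    # (Naive second-stage SEs are too small; dedicated IV routines fix them.)
    stage2 = smf.ols("outcome ~ participated_hat", data=df).fit()
    return stage2.params["participated_hat"]   # the IV / CACE-style estimate
```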
Previous Page | Next Page »
Pages: 1  |  2  |  3