Showing 46 to 60 of 613 results
Peer reviewed
Direct link
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms in a single evaluation, and replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
Peer reviewed
Direct link
Larry L. Orr; Robert B. Olsen; Stephen H. Bell; Ian Schmid; Azim Shivji; Elizabeth A. Stuart – Journal of Policy Analysis and Management, 2019
Evidence-based policy at the local level requires predicting the impact of an intervention to inform whether it should be adopted. Increasingly, local policymakers have access to published research evaluating the effectiveness of policy interventions from national research clearinghouses that review and disseminate evidence from program…
Descriptors: Educational Policy, Evidence Based Practice, Intervention, Decision Making
Peer reviewed
Direct link
Su, Yu-Xuan; Tu, Yu-Kang – Research Synthesis Methods, 2018
Network meta-analysis compares multiple treatments in terms of their efficacy and harm by including evidence from randomized controlled trials. Most clinical trials use a parallel design, where patients are randomly allocated to different treatments and receive only one treatment. However, some trials use within-person designs such as split-body,…
Descriptors: Network Analysis, Meta Analysis, Randomized Controlled Trials, Research Design
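The pooling step Su and Tu's abstract refers to can be illustrated with a minimal fixed-effect network meta-analysis, fitted as weighted least squares over a design matrix of treatment contrasts. This is a generic sketch of the standard graph-of-comparisons approach, not the within-person extension the article develops; the function name and data layout are invented here:

```python
import numpy as np

def network_meta_fixed(effects, variances, pairs, treatments, reference):
    """Fixed-effect network meta-analysis via weighted least squares.

    effects[i]   : estimated effect of pairs[i][1] relative to pairs[i][0]
    variances[i] : sampling variance of that estimate
    Returns the effect of each non-reference treatment vs. the reference,
    pooling direct and indirect evidence across the trial network.
    """
    others = [t for t in treatments if t != reference]
    X = np.zeros((len(effects), len(others)))
    for i, (a, b) in enumerate(pairs):
        if a != reference:
            X[i, others.index(a)] = -1.0  # comparator arm
        if b != reference:
            X[i, others.index(b)] = 1.0   # experimental arm
    w = 1.0 / np.asarray(variances, dtype=float)
    A = X.T @ (w[:, None] * X)            # weighted normal equations
    beta = np.linalg.solve(A, X.T @ (w * np.asarray(effects, dtype=float)))
    return dict(zip(others, beta))
```

On a consistent toy network (A vs. B, A vs. C, B vs. C), the indirect and direct comparisons agree, so the pooled estimates simply reproduce the underlying contrasts.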
Peer reviewed
Direct link
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2018
Design-based methods have recently been developed as a way to analyze randomized controlled trial (RCT) data for designs with a single treatment and control group. This article builds on this framework to develop design-based estimators for evaluations with multiple research groups. Results are provided for a wide range of designs used in…
Descriptors: Randomized Controlled Trials, Computation, Educational Research, Experimental Groups
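The single-treatment building block that Schochet's design-based framework extends is the Neyman difference-in-means with its conservative design-based variance. A minimal sketch of that building block (the article's multi-group estimators are not reproduced here; the function name is invented):

```python
import numpy as np

def neyman(y, t):
    """Design-based (Neyman) impact estimate for a two-arm RCT.

    y : outcomes, t : 0/1 treatment indicator.
    Returns the difference in means and its conservative variance
    estimate, which needs no model of the outcome distribution.
    """
    y1, y0 = y[t == 1], y[t == 0]
    tau = y1.mean() - y0.mean()
    var = y1.var(ddof=1) / y1.size + y0.var(ddof=1) / y0.size
    return tau, var
```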
Peer reviewed
PDF on ERIC Download full text
Schochet, Peter – Society for Research on Educational Effectiveness, 2018
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies (Freedman, 2008; Lin, 2013; Imbens and Rubin, 2015; Schochet, 2013, 2016; Yang and Tsiatis, 2001). The non-parametric estimators are derived using the building blocks of experimental designs with minimal…
Descriptors: Randomized Controlled Trials, Computation, Educational Research, Experimental Groups
Peer reviewed
Direct link
Jaylin Lowe; Charlotte Z. Mann; Jiaying Wang; Adam Sales; Johann A. Gagnon-Bartsch – Grantee Submission, 2024
Recent methods have sought to improve precision in randomized controlled trials (RCTs) by utilizing data from large observational datasets for covariate adjustment. For example, consider an RCT aimed at evaluating a new algebra curriculum, in which a few dozen schools are randomly assigned to treatment (new curriculum) or control (standard…
Descriptors: Randomized Controlled Trials, Middle School Mathematics, Middle School Students, Middle Schools
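The core idea in the Lowe et al. abstract, using predictions built from external observational data to sharpen an RCT estimate, can be illustrated with a simple residualized difference in means. This is a hedged sketch of the general idea, not necessarily the estimator the paper proposes; the simulated "auxiliary prediction" here is just the covariate itself:

```python
import numpy as np

def adjusted_ate(y, t, yhat):
    """Difference in means of prediction residuals y - yhat.

    Under randomization this stays unbiased for the average treatment
    effect for any prediction rule built WITHOUT the trial's outcomes,
    and an accurate external prediction shrinks the variance."""
    resid = y - yhat
    return resid[t == 1].mean() - resid[t == 0].mean()

def raw_ate(y, t):
    return y[t == 1].mean() - y[t == 0].mean()

# Toy replication study: the covariate x is assumed well predicted
# from auxiliary data, so yhat = x stands in for an external model.
rng = np.random.default_rng(0)
tau, n, reps = 0.5, 200, 400
raw_est, adj_est = [], []
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)
    t = rng.permutation(np.repeat([0, 1], n // 2))
    y = x + tau * t + rng.normal(0.0, 0.5, n)
    raw_est.append(raw_ate(y, t))
    adj_est.append(adjusted_ate(y, t, yhat=x))
```

Across replications the adjusted estimator centers on the same truth but with visibly smaller spread, which is the precision gain the abstract describes.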
Peer reviewed
Direct link
Claire Allen-Platt; Clara-Christina Gerstner; Robert Boruch; Alan Ruby – Society for Research on Educational Effectiveness, 2021
Background/Context: When a researcher tests an educational program, product, or policy in a randomized controlled trial (RCT) and detects a significant effect on an outcome, the intervention is usually classified as something that "works." When the expected effects are not found, however, there is seldom an orderly and transparent…
Descriptors: Educational Assessment, Randomized Controlled Trials, Evidence, Educational Research
Peer reviewed
PDF on ERIC Download full text
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2017
The purpose of this paper is to present results of recent advances in power analyses to detect the moderator effects in Cluster Randomized Trials (CRTs). The paper focuses on demonstrating the software PowerUp!-Moderator. It provides a resource for researchers seeking to design CRTs with adequate power to detect the moderator effects of…
Descriptors: Computer Software, Research Design, Randomized Controlled Trials, Statistical Analysis
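As a rough illustration of the kind of calculation PowerUp!-Moderator automates, the sketch below uses the standard normal-approximation power formula for a two-level CRT with equal cluster sizes; the doubled standard error for a balanced binary cluster-level moderator is a common textbook simplification, and the function name and interface are invented here (the software itself covers many more designs and uses exact t distributions):

```python
from statistics import NormalDist

def crt_power(delta, J, n, rho, P=0.5, alpha=0.05, cluster_moderator=False):
    """Approximate power for a two-level cluster randomized trial.

    delta : effect size in outcome standard-deviation units
    J, n  : number of clusters and students per cluster
    rho   : intraclass correlation; P : fraction of clusters treated
    If cluster_moderator is True, returns power for a balanced binary
    cluster-level moderator interaction (standard error doubles).
    """
    var = (rho / (P * (1 - P) * J)) + ((1 - rho) / (P * (1 - P) * J * n))
    se = var ** 0.5
    if cluster_moderator:
        se *= 2.0
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    # two-sided power under the normal approximation
    return nd.cdf(delta / se - z) + nd.cdf(-delta / se - z)
```

The comparisons below reflect the qualitative points such tools make: power rises with the number of clusters, and moderator effects demand far larger designs than main effects of the same size.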
Sales, Adam C.; Hansen, Ben B. – Journal of Educational and Behavioral Statistics, 2020
Conventionally, regression discontinuity analysis contrasts a univariate regression's limits as its independent variable, "R," approaches a cut point, "c," from either side. Alternative methods target the average treatment effect in a small region around "c," at the cost of an assumption that treatment assignment,…
Descriptors: Regression (Statistics), Computation, Statistical Inference, Robustness (Statistics)
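The "conventional" analysis that Sales and Hansen's abstract takes as its starting point, contrasting regression limits on either side of the cut point, can be sketched as a local linear sharp RD estimator on simulated data. This is the baseline method only, not the article's alternative; the function name and bandwidth choice are illustrative:

```python
import numpy as np

def sharp_rd(r, y, cutoff, bandwidth):
    """Conventional sharp RD: fit separate linear regressions on each
    side of the cutoff within the bandwidth and contrast the two
    predicted outcomes at the cutoff itself."""
    left = (r >= cutoff - bandwidth) & (r < cutoff)
    right = (r >= cutoff) & (r <= cutoff + bandwidth)

    def fit_at_cutoff(mask):
        X = np.column_stack([np.ones(mask.sum()), r[mask] - cutoff])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        return beta[0]  # intercept = prediction at the cutoff

    return fit_at_cutoff(right) - fit_at_cutoff(left)

# Simulated running variable with a true discontinuity of 2.0 at r = 0.
rng = np.random.default_rng(0)
r = rng.uniform(-1.0, 1.0, 4000)
y = 0.5 * r + 2.0 * (r >= 0) + rng.normal(0.0, 0.3, r.size)
est = sharp_rd(r, y, cutoff=0.0, bandwidth=0.5)
```

Because inference rests entirely on extrapolating the two fits to the boundary, the estimate is local to the cutoff, which is exactly the limitation the alternative methods in the abstract trade against stronger assumptions.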
Peer reviewed
Direct link
Dong, Nianbo; Kelcey, Benjamin; Spybrook, Jessaca – Journal of Experimental Education, 2018
Researchers are often interested in whether the effects of an intervention differ conditional on individual- or group-moderator variables such as children's characteristics (e.g., gender), teacher's background (e.g., years of teaching), and school's characteristics (e.g., urbanity); that is, the researchers seek to examine for whom and under what…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Intervention, Effect Size
Peer reviewed
Direct link
Spybrook, Jessaca; Kelcey, Benjamin; Dong, Nianbo – Journal of Educational and Behavioral Statistics, 2016
Recently, there has been an increase in the number of cluster randomized trials (CRTs) to evaluate the impact of educational programs and interventions. These studies are often powered for the main effect of treatment to address the "what works" question. However, program effects may vary by individual characteristics or by context,…
Descriptors: Randomized Controlled Trials, Statistical Analysis, Computation, Educational Research
Peer reviewed
Direct link
Stallasch, Sophie E.; Lüdtke, Oliver; Artelt, Cordula; Brunner, Martin – Journal of Research on Educational Effectiveness, 2021
To plan cluster-randomized trials with sufficient statistical power to detect intervention effects on student achievement, researchers need multilevel design parameters, including measures of between-classroom and between-school differences and the amounts of variance explained by covariates at the student, classroom, and school level. Previous…
Descriptors: Foreign Countries, Randomized Controlled Trials, Intervention, Educational Research
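The central design parameter Stallasch et al. catalogue, the share of outcome variance lying between clusters, is the intraclass correlation. A minimal sketch of the standard one-way ANOVA estimator for equal cluster sizes (the function name is invented; the article's multilevel covariate-adjusted parameters go well beyond this):

```python
import numpy as np

def icc_oneway(groups):
    """One-way ANOVA estimate of the intraclass correlation rho.

    groups : list of equal-sized arrays, one per cluster.
    rho = between-cluster variance / total variance; it drives how
    much power a cluster randomized trial loses relative to a
    simple random sample of the same size.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    n, J = len(groups[0]), len(groups)
    means = np.array([g.mean() for g in groups])
    grand = np.concatenate(groups).mean()
    msb = n * ((means - grand) ** 2).sum() / (J - 1)          # between MS
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (J * (n - 1))
    sigma_b2 = max((msb - msw) / n, 0.0)                      # truncate at 0
    return sigma_b2 / (sigma_b2 + msw)
```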
Peer reviewed
Direct link
Shen, Zuchao; Kelcey, Benjamin; Cox, Kyle T.; Zhang, Jiaqi – AERA Online Paper Repository, 2017
Recent studies show cluster randomized trials may be well powered to detect mediation or indirect effects in multilevel settings. However, the literature has rarely provided guidance on designing cluster randomized trials that aim to assess indirect effects. In this study, we developed closed-form expressions to estimate the variance of and the statistical…
Descriptors: Randomized Controlled Trials, Research Design, Context Effect, Statistical Analysis
Peer reviewed
Direct link
Moerbeek, Mirjam; Safarkhani, Maryam – Journal of Educational and Behavioral Statistics, 2018
Data from cluster randomized trials do not always have a pure hierarchical structure. For instance, students are nested within schools that may be crossed by neighborhoods, and soldiers are nested within army units that may be crossed by mental health-care professionals. It is important that the random cross-classification is taken into account…
Descriptors: Randomized Controlled Trials, Classification, Research Methodology, Military Personnel
Yoon, HyeonJin – ProQuest LLC, 2018
In basic regression discontinuity (RD) designs, causal inference is limited to the local area near a single cutoff. To strengthen the generality of the RD treatment estimate, a design with multiple cutoffs along the assignment variable continuum can be applied. The availability of multiple cutoffs allows estimation of a pooled average treatment…
Descriptors: Regression (Statistics), Program Evaluation, Computation, Statistical Analysis