Showing 1 to 15 of 29 results
Peer reviewed
Dahlia K. Remler; Gregg G. Van Ryzin – American Journal of Evaluation, 2025
This article reviews the origins and use of the terms quasi-experiment and natural experiment. It demonstrates how the terms conflate whether variation in the independent variable of interest falls short of random with whether researchers find, rather than intervene to create, that variation. Using the lens of assignment--the process driving…
Descriptors: Quasiexperimental Design, Research Design, Experiments, Predictor Variables
Peer reviewed
Zuchao Shen; Ben Kelcey – Society for Research on Educational Effectiveness, 2023
I. Purpose of the Study: Detecting whether interventions work or not (through main effect analysis) can provide empirical evidence regarding the causal linkage between malleable factors (e.g., interventions) and learner outcomes. In complement, moderation analyses help delineate for whom and under what conditions intervention effects are most…
Descriptors: Intervention, Program Effectiveness, Evidence, Research Design
Peer reviewed
Xinhe Wang; Ben B. Hansen – Society for Research on Educational Effectiveness, 2024
Background: Clustered randomized controlled trials are commonly used to evaluate the effectiveness of treatments. Frequently, stratified or paired designs are adopted in practice. Fogarty (2018) studied variance estimators for stratified and not clustered experiments, and Schochet et al. (2022) studied those for stratified, clustered RCTs with…
Descriptors: Causal Models, Randomized Controlled Trials, Computation, Probability
Eric C. Hedberg – Grantee Submission, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
E. C. Hedberg – American Journal of Evaluation, 2023
In cluster randomized evaluations, a treatment or intervention is randomly assigned to a set of clusters each with constituent individual units of observations (e.g., student units that attend schools, which are assigned to treatment). One consideration of these designs is how many units are needed per cluster to achieve adequate statistical…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
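The power consideration the two entries above describe is usually framed through the design effect: assigning treatment at the cluster level inflates the variance of the treatment-effect estimate relative to individual randomization. A minimal sketch of that standard calculation (not Hedberg's specific method; the ICC and cluster-size values below are illustrative):

```python
def design_effect(icc, m):
    """Variance inflation factor for a cluster randomized design
    with intraclass correlation `icc` and `m` units per cluster."""
    return 1 + (m - 1) * icc

def effective_sample_size(n_total, icc, m):
    """Equivalent number of independent observations."""
    return n_total / design_effect(icc, m)

# Example: 40 schools of 25 students each, ICC = 0.10
deff = design_effect(0.10, 25)                 # 1 + 24 * 0.10 = 3.4
ess = effective_sample_size(40 * 25, 0.10, 25)  # 1000 / 3.4, roughly 294
```

Because the design effect grows with cluster size, adding units within clusters yields diminishing returns once the ICC is nontrivial, which is why the per-cluster sample size question the abstracts raise matters.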
Xinran Li; Peng Ding; Donald B. Rubin – Grantee Submission, 2020
With many pretreatment covariates and treatment factors, the classical factorial experiment often fails to balance covariates across multiple factorial effects simultaneously. Therefore, it is intuitive to restrict the randomization of the treatment factors to satisfy certain covariate balance criteria, possibly conforming to the tiers of…
Descriptors: Experiments, Research Design, Randomized Controlled Trials, Sampling
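The restricted randomization the Li, Ding, and Rubin abstract describes can be illustrated with a simple rerandomization loop: draw a random assignment, measure covariate imbalance with a Mahalanobis-type statistic, and accept only assignments below a threshold. This is a generic sketch of rerandomization, not the paper's tiered factorial criterion; the threshold and data are illustrative:

```python
import numpy as np

def mahalanobis_balance(X, assign):
    """Standardized covariate mean difference between arms."""
    diff = X[assign == 1].mean(axis=0) - X[assign == 0].mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    n1, n0 = (assign == 1).sum(), (assign == 0).sum()
    return float(diff @ inv_cov @ diff) / (1.0 / n1 + 1.0 / n0)

def rerandomize(X, n_treat, threshold, rng, max_tries=10000):
    """Redraw assignments until the balance criterion is met."""
    n = X.shape[0]
    for _ in range(max_tries):
        assign = np.zeros(n, dtype=int)
        assign[rng.choice(n, size=n_treat, replace=False)] = 1
        if mahalanobis_balance(X, assign) <= threshold:
            return assign
    raise RuntimeError("no acceptable assignment found")

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))          # 40 units, 2 baseline covariates
assign = rerandomize(X, n_treat=20, threshold=1.0, rng=rng)
```

Because acceptance is defined before outcomes are observed, randomization-based inference remains valid, though it must account for the restricted assignment distribution.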
Peer reviewed
Wu, Edward; Gagnon-Bartsch, Johann A. – Journal of Educational and Behavioral Statistics, 2021
In paired experiments, participants are grouped into pairs with similar characteristics, and one observation from each pair is randomly assigned to treatment. The resulting treatment and control groups should be well-balanced; however, there may still be small chance imbalances. Building on work for completely randomized experiments, we propose a…
Descriptors: Experiments, Groups, Research Design, Statistical Analysis
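The paired design described above can be sketched as sorting units on a baseline covariate, pairing neighbors, and flipping a coin within each pair. This is a generic matched-pairs assignment, not the authors' adjustment method; the unit IDs and covariate values are illustrative:

```python
import random

def pair_and_assign(units, key, rng):
    """units: list of (unit_id, covariate) tuples.
    Pairs adjacent units after sorting by `key`, then randomly
    assigns one member of each pair to treatment (1) vs control (0)."""
    ordered = sorted(units, key=key)
    assignment = {}
    for i in range(0, len(ordered) - 1, 2):
        a, b = ordered[i], ordered[i + 1]
        if rng.random() < 0.5:
            assignment[a[0]], assignment[b[0]] = 1, 0
        else:
            assignment[a[0]], assignment[b[0]] = 0, 1
    return assignment

rng = random.Random(0)
units = [("u%d" % i, v) for i, v in enumerate([3.1, 2.7, 5.0, 4.9, 1.2, 1.3])]
assign = pair_and_assign(units, key=lambda u: u[1], rng=rng)
```

Each pair contributes exactly one treated and one control unit, which is what makes the groups well-balanced on the pairing covariate while still leaving the small chance imbalances on other covariates that the abstract addresses.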
Benjamin A. Motz; Öykü Üner; Harmony E. Jankowski; Marcus A. Christie; Kim Burgas; Diego del Blanco Orobitg; Mark A. McDaniel – Grantee Submission, 2023
For researchers seeking to improve education, a common goal is to identify teaching practices that have causal benefits in classroom settings. To test whether an instructional practice exerts a causal influence on an outcome measure, the most straightforward and compelling method is to conduct an experiment. While experimentation is common in…
Descriptors: Learning Analytics, Experiments, Learning Processes, Learning Management Systems
Peer reviewed
Demby, Hilary; Jenner, Lynne; Gregory, Alethia; Jenner, Eric – American Journal of Evaluation, 2020
Despite the increase in federal tiered evidence initiatives that require the use of rigorous evaluation designs, such as randomized experiments, there has been limited guidance in the evaluation literature on practical strategies to implement such studies successfully. This paper provides lessons learned in executing experiments in applied…
Descriptors: Randomized Controlled Trials, Evaluation, Experiments, Evaluators
Peer reviewed
Vellinga, Akke; Devine, Colum; Ho, Min Yun; Clarke, Colin; Leahy, Patrick; Bourke, Jane; Devane, Declan; Duane, Sinead; Kearney, Patricia – Research Ethics, 2020
Incentivising has shown to improve participation in clinical trials. However, ethical concerns suggest that incentives may be coercive, obscure trial risks and encourage individuals to enrol in clinical trials for the wrong reasons. The aim of our study was to develop and pilot a discrete choice experiment (DCE) to explore and identify preferences…
Descriptors: Patients, Value Judgment, Incentives, Randomized Controlled Trials
Peer reviewed
Barnow, Burt S.; Greenberg, David H. – American Journal of Evaluation, 2020
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms in a single evaluation and replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials, which include increasing sample size to increase statistical power; identifying the most…
Descriptors: Evaluation, Randomized Controlled Trials, Experiments, Replication (Evaluation)
Gagnon-Bartsch, J. A.; Sales, A. C.; Wu, E.; Botelho, A. F.; Erickson, J. A.; Miratrix, L. W.; Heffernan, N. T. – Grantee Submission, 2019
Randomized controlled trials (RCTs) admit unconfounded design-based inference--randomization largely justifies the assumptions underlying statistical effect estimates--but often have limited sample sizes. However, researchers may have access to big observational data on covariates and outcomes from RCT non-participants. For example, data from A/B…
Descriptors: Randomized Controlled Trials, Educational Research, Prediction, Algorithms
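The idea in the Gagnon-Bartsch et al. abstract, using predictions from a model fit to non-participants to sharpen an RCT estimate, can be sketched as residualizing outcomes against external predictions before differencing arm means. This is a simplified illustration under assumed synthetic data, not the paper's estimator: because the predictions depend only on pre-treatment information, the estimator stays design-based while the residual variance can be much smaller.

```python
import numpy as np

def adjusted_ate(y, z, yhat):
    """Difference in mean residuals (y - yhat) between arms.
    `yhat` must come from a model fit without using z or the RCT's y."""
    r = y - yhat
    return r[z == 1].mean() - r[z == 0].mean()

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)                                  # baseline covariate
z = rng.permutation(np.arange(n) < n // 2).astype(int)  # 100 treated, 100 control
y = 2.0 * x + 0.5 * z + rng.normal(scale=0.1, size=n)   # true effect = 0.5
yhat = 2.0 * x        # stand-in for an external model's predictions
est = adjusted_ate(y, z, yhat)
```

When the auxiliary predictions explain much of the outcome variance, as here, the adjusted estimator is far more precise than the raw difference in means, which is the motivation the abstract gives for borrowing big observational data.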
Peer reviewed
Thomas, Gary – Harvard Educational Review, 2016
The past few years have seen a resurgence of faith in experimentation in education inquiry, and particularly in randomized controlled trials (RCTs). Proponents of such research have succeeded in bringing into common parlance the term "gold standard," which suggests that research emerging from any other design frame fails to achieve the…
Descriptors: Randomized Controlled Trials, Research Methodology, Educational Research, Best Practices
Peng Ding; Avi Feller; Luke Miratrix – Grantee Submission, 2016
Applied researchers are increasingly interested in whether and how treatment effects vary in randomized evaluations, especially variation not explained by observed covariates. We propose a model-free approach for testing for the presence of such unexplained variation. To use this randomization-based approach, we must address the fact that the…
Descriptors: Randomized Controlled Trials, Statistical Inference, Evaluation Methods, Testing
Peer reviewed
Killion, Joellen – Journal of Staff Development, 2016
Teacher coaching is a powerful form of professional learning that improves teaching practices and student achievement, yet little is known about the specific aspects of coaching programs that are more effective. Researchers used a blocked randomized experiment to study the effects of one-to-one coaching on teacher practice. When pooled across all…
Descriptors: Teaching Methods, Tutors, Professional Development, Academic Achievement