Showing 1 to 15 of 17 results
Peer reviewed
Joseph Taylor; Dung Pham; Paige Whitney; Jonathan Hood; Lamech Mbise; Qi Zhang; Jessaca Spybrook – Society for Research on Educational Effectiveness, 2023
Background: Power analyses for a cluster-randomized trial (CRT) require estimates of additional design parameters beyond those needed for an individually randomized trial. In a 2-level CRT, there are two sample sizes: the number of clusters and the number of individuals per cluster. The intraclass correlation (ICC), or the proportion of variance…
Descriptors: Statistical Analysis, Multivariate Analysis, Randomized Controlled Trials, Research Design
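The design parameters in this abstract map directly onto the standard minimum detectable effect size (MDES) calculation for a two-level CRT. The sketch below assumes the familiar Bloom-style formula rather than anything specific to this paper; it shows how the number of clusters J, the cluster size n, and the ICC enter the calculation, and all parameter values are illustrative.

```python
# A minimal sketch of a two-level CRT power calculation, assuming the
# standard MDES formula for a balanced design. Values are illustrative.
from scipy.stats import t

def mdes_2level_crt(J, n, icc, alpha=0.05, power=0.80, p_treat=0.5):
    """MDES in standard-deviation units for a 2-level CRT with J clusters
    of n individuals each, intraclass correlation `icc`, and a fraction
    `p_treat` of clusters assigned to treatment."""
    df = J - 2  # degrees of freedom for the cluster-level treatment contrast
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    var_term = icc / (p_treat * (1 - p_treat) * J) \
        + (1 - icc) / (p_treat * (1 - p_treat) * J * n)
    return multiplier * var_term ** 0.5

# Example: 40 schools, 25 students each, ICC = 0.15 -> MDES of roughly 0.39 SD
print(round(mdes_2level_crt(J=40, n=25, icc=0.15), 3))
```

Note how the ICC term is divided only by J while the individual-level term is divided by J*n: past a point, adding students per school buys little, which is why the ICC estimates this paper concerns itself with matter so much at the design stage.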
Peer reviewed
Garret J. Hall; Sophia Putzeys; Thomas R. Kratochwill; Joel R. Levin – Educational Psychology Review, 2024
Single-case experimental designs (SCEDs) have a long history in clinical and educational disciplines. One underdeveloped area in advancing SCED design and analysis is understanding the process of how internal validity threats and operational concerns are avoided or mitigated. Two strategies to ameliorate such issues in SCED involve replication and…
Descriptors: Research Design, Graphs, Case Studies, Validity
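Of the two strategies the abstract mentions, randomization lends itself to a compact illustration. The following is a minimal sketch of a start-point randomization test for an AB single-case design, one common SCED randomization scheme; the data series, test statistic, and eligible start points are hypothetical and not drawn from the paper.

```python
# A minimal sketch of a start-point randomization test for an AB
# single-case design. All numbers below are made up for illustration.
import numpy as np

def ab_randomization_test(y, actual_start, eligible_starts):
    """P-value for the mean shift at `actual_start`, referenced against
    the distribution of mean shifts over all eligible start points."""
    def shift(start):
        return np.mean(y[start:]) - np.mean(y[:start])
    observed = shift(actual_start)
    null_dist = np.array([shift(s) for s in eligible_starts])
    return np.mean(np.abs(null_dist) >= abs(observed))

y = np.array([3, 4, 3, 5, 4, 8, 9, 7, 9, 8], dtype=float)  # hypothetical series
print(ab_randomization_test(y, actual_start=5, eligible_starts=range(3, 8)))
```

Because the intervention start point was chosen at random from the eligible set, the permutation distribution is valid by design, which is precisely the internal-validity benefit of randomization that the authors discuss.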
Peer reviewed
Wei Li; Yanli Xie; Dung Pham; Nianbo Dong; Jessaca Spybrook; Benjamin Kelcey – Asia Pacific Education Review, 2024
Cluster randomized trials (CRTs) are commonly used to evaluate the causal effects of educational interventions, where entire clusters (e.g., schools) are randomly assigned to treatment or control conditions. This study introduces statistical methods for designing and analyzing two-level (e.g., students nested within schools) and three-level…
Descriptors: Research Design, Multivariate Analysis, Randomized Controlled Trials, Hierarchical Linear Modeling
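A two-level analysis of the kind described, students nested within schools with schools randomized to condition, is commonly fit as a mixed model. The sketch below uses statsmodels' MixedLM on simulated data; the simulation settings (40 schools, 25 students each, ICC of 0.15, a 0.30 SD effect) are illustrative assumptions, not values from the study.

```python
# A minimal sketch of a two-level CRT analysis on simulated data:
# students nested within schools, schools randomized to condition.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
J, n, icc, effect = 40, 25, 0.15, 0.30          # illustrative assumptions
school = np.repeat(np.arange(J), n)
treat = np.repeat(rng.permutation([0, 1] * (J // 2)), n)
u = rng.normal(0, np.sqrt(icc), J)[school]       # school random effects
e = rng.normal(0, np.sqrt(1 - icc), J * n)       # student-level residuals
y = effect * treat + u + e

data = pd.DataFrame({"y": y, "treat": treat, "school": school})
model = smf.mixedlm("y ~ treat", data, groups=data["school"]).fit()
print(model.summary())  # the `treat` coefficient estimates the program effect
```

The random intercept for school absorbs the between-cluster variance, so the standard error on `treat` correctly reflects that randomization happened at the school level rather than the student level.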
Peer reviewed
Emma Law; Isabel Smith – Research Ethics, 2024
During the COVID-19 pandemic, the race to find an effective vaccine or treatment saw an 'extraordinary number' of clinical trials being conducted. While there were some key success stories, not all trials produced results that informed patient care. There was a significant amount of waste in clinical research during the pandemic, which is said to…
Descriptors: Ethics, Research Methodology, Integrity, COVID-19
Peer reviewed
Kenneth A. Frank; Qinyun Lin; Spiro J. Maroulis – Grantee Submission, 2024
In the complex world of educational policy, causal inferences will be debated. As we review non-experimental designs in educational policy, we focus on how to clarify and focus the terms of debate. We begin by presenting the potential outcomes/counterfactual framework and then describe approximations to the counterfactual generated from the…
Descriptors: Causal Models, Statistical Inference, Observation, Educational Policy
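The potential outcomes/counterfactual framework the authors begin from can be made concrete with a short simulation: each unit carries two potential outcomes, only one of which is observed, and the assignment mechanism determines whether the observed contrast recovers the causal effect. Everything in the sketch, including the confounder and the effect size, is invented for illustration.

```python
# A minimal sketch of the potential-outcomes framework: every unit has
# Y(1) and Y(0), only one is observed, and assignment decides what the
# observed difference in means actually estimates. Illustrative numbers.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
ability = rng.normal(size=N)            # unobserved confounder
y0 = ability + rng.normal(size=N)       # potential outcome without treatment
y1 = y0 + 0.25                          # true effect is 0.25 for everyone

true_ate = np.mean(y1 - y0)

# Non-random assignment: higher-ability units select into treatment.
d_sel = (ability + rng.normal(size=N) > 0).astype(int)
y_obs = np.where(d_sel == 1, y1, y0)
naive = y_obs[d_sel == 1].mean() - y_obs[d_sel == 0].mean()

# Randomized assignment recovers the counterfactual contrast on average.
d_rand = rng.integers(0, 2, N)
y_rand = np.where(d_rand == 1, y1, y0)
randomized = y_rand[d_rand == 1].mean() - y_rand[d_rand == 0].mean()

print(f"true ATE {true_ate:.2f}, naive {naive:.2f}, randomized {randomized:.2f}")
```

The naive contrast badly overstates the effect because selection on ability breaks the comparability of the groups; the non-experimental designs the authors review are, in effect, different strategies for approximating the missing counterfactual without randomization.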
Peer reviewed
Kelly Hallberg; Andrew Swanlund; Ryan Williams – Society for Research on Educational Effectiveness, 2021
Background: The COVID-19 pandemic and the subsequent public health response led to an unprecedented disruption in educational instruction in the U.S. and around the world. Many schools quickly moved to virtual learning for the bulk of the 2020 spring term and many states cancelled annual assessments of student learning. The 2020-21 school year…
Descriptors: Research Problems, Educational Research, Research Design, Randomized Controlled Trials
Spybrook, Jessaca; Zhang, Qi; Kelcey, Ben; Dong, Nianbo – Educational Evaluation and Policy Analysis, 2020
Over the past 15 years, we have seen an increase in the use of cluster randomized trials (CRTs) to test the efficacy of educational interventions. These studies are often designed with the goal of determining whether a program works, or answering the "what works" question. Recently, the goals of these studies have expanded to include for whom and under…
Descriptors: Randomized Controlled Trials, Educational Research, Program Effectiveness, Intervention
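Moving from "what works" to "for whom" has a direct design cost that a quick calculation makes visible. Assuming a binary cluster-level moderator that splits the clusters evenly, and the same standard MDES formula used for main effects, the treatment-by-moderator interaction MDES is roughly double the main-effect MDES; the sketch below works through that approximation with illustrative numbers only.

```python
# A minimal sketch of the precision cost of a "for whom" question,
# assuming a 50/50 binary cluster-level moderator. Illustrative values.
from scipy.stats import t

def mdes(J, n, icc, alpha=0.05, power=0.80):
    df = J - 4  # two extra cluster-level parameters: moderator, interaction
    m = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    return m * (icc / (0.25 * J) + (1 - icc) / (0.25 * J * n)) ** 0.5

main_effect = mdes(J=40, n=25, icc=0.15)
# Each moderator subgroup has only J/2 clusters, and the interaction is a
# difference of two subgroup effects, so its SE is about twice as large.
interaction = 2 * main_effect
print(round(main_effect, 2), round(interaction, 2))
```

This rough doubling is why trials powered only for the overall "what works" question are usually badly underpowered for moderator questions, the design tension this paper examines.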
Peer reviewed
Cartwright, Nancy – Educational Research and Evaluation, 2019
Across the evidence-based policy and practice (EBPP) community, including education, randomised controlled trials (RCTs) rank as the most "rigorous" evidence for causal conclusions. This paper argues that this is misleading. Only narrow conclusions about study populations can be warranted with the kind of "rigour" that RCTs…
Descriptors: Evidence Based Practice, Educational Policy, Randomized Controlled Trials, Error of Measurement
Peer reviewed
Larry V. Hedges – Journal of Research on Educational Effectiveness, 2018
The scientific rigor of education research has improved dramatically since the year 2000. Much of the credit for this improvement is deserved by Institute of Education Sciences (IES) policies that helped create a demand for rigorous research; increased human capital capacity to carry out such work; provided funding for the work itself; and…
Descriptors: Educational Research, Generalization, Intervention, Human Capital
Peer reviewed
Simpson, Adrian – Educational Research and Evaluation, 2018
Ainsworth et al.'s paper "Sources of Bias in Outcome Assessment in Randomised Controlled Trials: A Case Study" examines alternative accounts for a large difference in effect size between two outcomes in the same intervention evaluation. It argues that the probable explanation relates to masking: Only one outcome measure was administered by…
Descriptors: Statistical Bias, Randomized Controlled Trials, Effect Size, Outcome Measures
Peer reviewed
Taber, Keith S. – Studies in Science Education, 2019
Experimental studies are often employed to test the effectiveness of teaching innovations such as new pedagogy, curriculum, or learning resources. This article offers guidance on good practice in developing research designs, and in drawing conclusions from published reports. Randomised controlled trials potentially support the use of statistical…
Descriptors: Instructional Innovation, Educational Research, Research Design, Research Methodology
Larry V. Hedges – Grantee Submission, 2017
The scientific rigor of education research has improved dramatically since the year 2000. Much of the credit for this improvement is deserved by Institute of Education Sciences (IES) policies that helped create a demand for rigorous research; increased human capital capacity to carry out such work; provided funding for the work itself; and…
Descriptors: Educational Research, Generalization, Intervention, Human Capital
Peer reviewed
Connolly, Paul; Keenan, Ciara; Urbanska, Karolina – Educational Research, 2018
Background: The use of randomised controlled trials (RCTs) in education has increased significantly over the last 15 years. However, their use has also been subject to sustained and rather trenchant criticism from significant sections of the education research community. Key criticisms have included the claims that: it is not possible to undertake…
Descriptors: Evidence Based Practice, Randomized Controlled Trials, Educational Research, Educational History
Peer reviewed
PDF on ERIC
Murphy, David; Oliver, Mary; Pourhabib, Sanam; Adkins, Michael; Hodgen, Jeremy – Education Endowment Foundation, 2017
This report examines the range of factors that might influence the decision by social care professionals on the use of boarding schools as an intervention option for Children in Need (CiN) or children on a Child Protection Plan (CPP). Attempts to conduct a randomised controlled trial (RCT) failed to recruit participants. Initially, failure to…
Descriptors: Boarding Schools, Caseworkers, Social Work, Intervention
Peer reviewed
PDF on ERIC
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
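One way to see the problem the abstract points to: a fixed small bias that is negligible against a 0.20 SD target can dominate a study powered for much smaller impacts. The sketch below assumes an individually randomized two-arm design, a hypothetical 0.03 SD systematic bias, and standard normal-approximation power formulas; none of the numbers come from the report.

```python
# A minimal sketch of how a small fixed bias interacts with designs
# powered to detect small impacts. All values are illustrative.
import numpy as np
from scipy.stats import norm

def n_per_arm(mdes, alpha=0.05, power=0.80):
    """Two-arm individually randomized sample size for a given MDES (SD units)."""
    m = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * (m / mdes) ** 2))

bias = 0.03  # hypothetical systematic bias, e.g. from differential attrition
for target in (0.20, 0.05):
    n = n_per_arm(target)
    se = np.sqrt(2 / n)  # SE of the difference in means (unit-variance outcome)
    z = norm.ppf(0.975)
    # Rejection rate when the true effect is zero but the estimate carries `bias`
    false_pos = 1 - norm.cdf(z - bias / se) + norm.cdf(-z - bias / se)
    print(f"MDES {target:.2f}: n/arm {n:,}, false-positive rate {false_pos:.2f}")
```

With these assumptions, the same 0.03 SD bias that barely moves the false-positive rate in a study powered for 0.20 SD pushes it to several times the nominal 5% level in a study powered for 0.05 SD, which is the core tension between detecting substantively meaningful small impacts and keeping bias in check.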