Publication Date
In 2025 | 0
Since 2024 | 1
Since 2021 (last 5 years) | 7
Since 2016 (last 10 years) | 19
Since 2006 (last 20 years) | 29
Descriptor
Research Design | 29
Statistical Analysis | 17
Randomized Controlled Trials | 12
Educational Research | 11
Effect Size | 11
Sample Size | 10
Intervention | 9
Regression (Statistics) | 9
Computation | 6
Control Groups | 6
Correlation | 6
Source
Journal of Research on Educational Effectiveness | 29
Author
Spybrook, Jessaca | 5
Bloom, Howard S. | 3
Dong, Nianbo | 3
Bloom, Howard | 2
Brunner, Martin | 2
Jacob, Robin | 2
Kelcey, Ben | 2
Lüdtke, Oliver | 2
Reardon, Sean F. | 2
Tipton, Elizabeth | 2
Unlu, Fatih | 2
Publication Type
Journal Articles | 29
Reports - Research | 29
Information Analyses | 2
Education Level
Elementary Education | 7
Secondary Education | 6
Grade 4 | 3
Intermediate Grades | 3
Middle Schools | 3
Elementary Secondary Education | 2
Grade 3 | 2
Junior High Schools | 2
Early Childhood Education | 1
Grade 2 | 1
Grade 5 | 1
Audience
Researchers | 1
Laws, Policies, & Programs
No Child Left Behind Act 2001 | 2
Elementary and Secondary… | 1
Assessments and Surveys
Comprehensive Tests of Basic Skills | 1
Gates MacGinitie Reading Tests | 1
National Assessment of Educational Progress | 1
Program for International Student Assessment | 1
Stanford Achievement Tests | 1
What Works Clearinghouse Rating
Does not meet standards | 1
Keele, Luke; Lenard, Matthew; Page, Lindsay – Journal of Research on Educational Effectiveness, 2024
In education settings, treatments are often non-randomly assigned to clusters, such as schools or classrooms, while outcomes are measured for students. This research design is called the clustered observational study (COS). We examine the consequences of common support violations in the COS context. Common support violations occur when the…
Descriptors: Intervention, Cluster Grouping, Observation, Catholic Schools
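To make the common-support idea concrete, the following minimal sketch (not the authors' method; the data, covariates, and logistic propensity model are all hypothetical) flags clusters whose estimated propensity scores fall outside the range shared by treated and control groups.

```python
# Illustrative check for common support in a clustered observational study (COS).
# Generic sketch, not the method from Keele, Lenard & Page (2024): fit a
# cluster-level propensity model and flag clusters whose estimated propensity
# scores fall outside the region where treated and control clusters overlap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical cluster-level data: 60 schools, 2 covariates, non-random assignment.
X = rng.normal(size=(60, 2))                                   # school-level covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-1.5 * X[:, 0])))      # assignment depends on X

ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# Common support: the range of propensity scores shared by both groups.
lo = max(ps[treat == 1].min(), ps[treat == 0].min())
hi = min(ps[treat == 1].max(), ps[treat == 0].max())
off_support = (ps < lo) | (ps > hi)

print(f"common-support interval: [{lo:.2f}, {hi:.2f}]")
print(f"clusters off support: {off_support.sum()} of {len(ps)}")
```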
Shen, Zuchao; Kelcey, Benjamin – Journal of Research on Educational Effectiveness, 2022
Optimal sampling frameworks attempt to identify the most efficient sampling plans that achieve adequate statistical power. Although such calculations are theoretical in nature, they are critical to the judicious and wise use of funding because they serve as important starting points that guide practical discussions around sampling tradeoffs and…
Descriptors: Sampling, Research Design, Randomized Controlled Trials, Statistical Analysis
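As a companion to this entry, here is a textbook cost-constrained sampling calculation for a two-level cluster-randomized trial; it is a standard result rather than the specific framework in the article, and the cost and intraclass-correlation values are invented.

```python
# Textbook cost-constrained sampling trade-off for a two-level cluster-randomized
# trial (Raudenbush-style), not the specific framework in Shen & Kelcey (2022).
# All numbers below are hypothetical.
import math

def optimal_cluster_size(cost_cluster, cost_student, icc):
    """Students per cluster that minimize the variance of the treatment effect
    estimate for a fixed budget: n* = sqrt((C_cluster / C_student) * (1 - rho) / rho)."""
    return math.sqrt((cost_cluster / cost_student) * (1 - icc) / icc)

n_star = optimal_cluster_size(cost_cluster=500, cost_student=10, icc=0.20)
print(f"optimal students per cluster ~ {n_star:.1f}")  # ~14 under these assumptions
```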
Li, Wei; Dong, Nianbo; Maynard, Rebecca; Spybrook, Jessaca; Kelcey, Ben – Journal of Research on Educational Effectiveness, 2023
Cluster randomized trials (CRTs) are commonly used to evaluate educational interventions, particularly their effectiveness. Recently there has been greater emphasis on using these trials to explore cost-effectiveness. However, methods for establishing the power of cluster randomized cost-effectiveness trials (CRCETs) are limited. This study…
Descriptors: Research Design, Statistical Analysis, Randomized Controlled Trials, Cost Effectiveness
Xu, Menglin; Logan, Jessica A. R. – Journal of Research on Educational Effectiveness, 2021
Planned missing data designs allow researchers to have highly-powered studies by testing only a fraction of the traditional sample size. In two-method measurement planned missingness designs, researchers assess only part of the sample on a high-quality expensive measure, while the entire sample is given a more inexpensive, but biased measure. The…
Descriptors: Longitudinal Studies, Research Design, Research Problems, Structural Equation Models
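The design described above can be illustrated with a short simulation; the sample size, reliability values, and 30% subsampling rate below are invented, and the snippet only shows how the planned missingness pattern is constructed.

```python
# Minimal illustration of a two-method measurement planned missing data design:
# every participant completes an inexpensive (biased) proxy, and only a random
# subsample completes the expensive gold-standard measure. Generic sketch of the
# design described above, with made-up numbers.
import numpy as np

rng = np.random.default_rng(1)
n, expensive_fraction = 1000, 0.3

true_score = rng.normal(0, 1, n)
cheap = 0.8 * true_score + 0.5 + rng.normal(0, 0.6, n)   # biased, noisier proxy
expensive = true_score + rng.normal(0, 0.2, n)           # high-quality measure

# Planned missingness: the expensive measure is collected for only 30% of cases.
gets_expensive = rng.random(n) < expensive_fraction
expensive[~gets_expensive] = np.nan

print(f"expensive measure observed for {np.isfinite(expensive).sum()} of {n} participants")
```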
Deke, John; Wei, Thomas; Kautz, Tim – Journal of Research on Educational Effectiveness, 2021
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may…
Descriptors: Intervention, Program Evaluation, Sample Size, Randomized Controlled Trials
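To see why targeting much smaller impacts is demanding, a standard two-arm sample-size approximation (not the article's analysis) is sketched below; the alpha and power values are conventional defaults.

```python
# Back-of-the-envelope illustration of how sample size requirements grow as the
# target impact shrinks (simple two-arm, individually randomized design with
# equal allocation; a standard formula, not the analysis in Deke et al.).
from statistics import NormalDist

def n_per_arm(effect_size_sd, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size_sd ** 2

for delta in (0.20, 0.10, 0.05):
    print(f"effect = {delta:.2f} SD -> ~{n_per_arm(delta):,.0f} students per arm")
```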
Chan, Wendy; Oh, Jimin; Luo, Peihao – Journal of Research on Educational Effectiveness, 2021
Findings from experimental studies have increasingly been used to inform policy in school settings. Thus far, the populations in many of these studies have typically been defined in a cross-sectional context; namely, they are defined in the same academic year in which the study took place, or at a fixed time point.…
Descriptors: Generalization, Research Design, Demography, Case Studies
Kowalski, Susan M.; Taylor, Joseph A.; Askinas, Karen M.; Wang, Qian; Zhang, Qi; Maddix, William P.; Tipton, Elizabeth – Journal of Research on Educational Effectiveness, 2020
Developing and maintaining a high-quality science teaching corps has become increasingly urgent with standards that require students to move beyond mastering facts to reasoning and arguing from evidence. "Effective" professional development (PD) for science teachers enhances teacher outcomes and, in turn, enhances primary and secondary…
Descriptors: Effect Size, Faculty Development, Science Teachers, Program Effectiveness
Kelcey, Ben; Spybrook, Jessaca; Dong, Nianbo; Bai, Fangxing – Journal of Research on Educational Effectiveness, 2020
Professional development for teachers is regarded as one of the principal pathways through which we can understand and cultivate effective teaching and improve student outcomes. A critical component of studies that seek to improve teaching through professional development is the detailed assessment of the intermediate teacher development processes…
Descriptors: Faculty Development, Educational Research, Randomized Controlled Trials, Research Design
Gehlbach, Hunter; Robinson, Carly D. – Journal of Research on Educational Effectiveness, 2018
Like performance-enhancing drugs inflating apparent athletic achievements, several common social science practices contribute to the production of illusory results. In this article, we examine the processes that lead to illusory findings and describe their consequences. We borrow from an approach used increasingly by other disciplines--the norm of…
Descriptors: Educational Research, Research Methodology, Research Reports, Hypothesis Testing
Stallasch, Sophie E.; Lüdtke, Oliver; Artelt, Cordula; Brunner, Martin – Journal of Research on Educational Effectiveness, 2021
To plan cluster-randomized trials with sufficient statistical power to detect intervention effects on student achievement, researchers need multilevel design parameters, including measures of between-classroom and between-school differences and the amounts of variance explained by covariates at the student, classroom, and school level. Previous…
Descriptors: Foreign Countries, Randomized Controlled Trials, Intervention, Educational Research
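The role these design parameters play can be illustrated with the usual Bloom-style minimum detectable effect size (MDES) approximation for a two-level cluster-randomized trial; the formula is standard, but the ICC, explained-variance, and sample-size values below are invented rather than the article's estimates.

```python
# Standard minimum detectable effect size (MDES) approximation for a two-level
# cluster-randomized trial (schools randomized, students nested in schools).
# Shows how design parameters (ICC, explained variance) enter power planning;
# all parameter values are invented.
import math
from statistics import NormalDist

def mdes_two_level(J, n, icc, r2_school=0.0, r2_student=0.0,
                   p_treat=0.5, alpha=0.05, power=0.80):
    """J schools, n students per school; normal approximation to the multiplier."""
    z = NormalDist().inv_cdf
    multiplier = z(1 - alpha / 2) + z(power)          # ~2.80 for alpha=.05, power=.80
    var = (icc * (1 - r2_school) / (p_treat * (1 - p_treat) * J)
           + (1 - icc) * (1 - r2_student) / (p_treat * (1 - p_treat) * J * n))
    return multiplier * math.sqrt(var)

print(f"no covariates:      MDES ~ {mdes_two_level(60, 25, icc=0.20):.2f} SD")
print(f"school-level R2=.5: MDES ~ {mdes_two_level(60, 25, icc=0.20, r2_school=0.5):.2f} SD")
```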
Bloom, Howard; Bell, Andrew; Reiman, Kayla – Journal of Research on Educational Effectiveness, 2020
This article assesses the likely generalizability of educational treatment-effect estimates from regression discontinuity designs (RDDs) when treatment assignment is based on academic pretest scores. Our assessment uses data on outcome and pretest measures from six educational experiments, ranging from preschool through high school, to estimate…
Descriptors: Data Use, Randomized Controlled Trials, Research Design, Regression (Statistics)
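For readers unfamiliar with the design, a bare-bones sharp regression discontinuity estimate based on a pretest cutoff looks like the sketch below; the data, cutoff, and bandwidth are made up, and this is not the authors' analysis.

```python
# Generic sharp regression discontinuity estimate based on a pretest cutoff:
# local linear regression on both sides of the cutoff within a bandwidth.
# Plain illustration of the design discussed above; data and bandwidth invented.
import numpy as np

rng = np.random.default_rng(2)
pretest = rng.normal(0, 1, 2000)
cutoff, bandwidth, true_effect = 0.0, 0.75, 0.30
treated = (pretest < cutoff).astype(float)          # low scorers receive the program
outcome = 0.5 * pretest + true_effect * treated + rng.normal(0, 1, 2000)

r = pretest - cutoff
keep = np.abs(r) <= bandwidth
X = np.column_stack([np.ones(keep.sum()), treated[keep], r[keep], treated[keep] * r[keep]])
beta, *_ = np.linalg.lstsq(X, outcome[keep], rcond=None)
print(f"estimated effect at the cutoff: {beta[1]:.2f} (true value {true_effect})")
```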
Westine, Carl D.; Unlu, Fatih; Taylor, Joseph; Spybrook, Jessaca; Zhang, Qi; Anderson, Brent – Journal of Research on Educational Effectiveness, 2020
Experimental research in education and training programs typically involves administering treatment to whole groups of individuals. As such, researchers rely on the estimation of design parameter values to conduct power analyses to efficiently plan their studies to detect desired effects. In this study, we present design parameter estimates from a…
Descriptors: Outcome Measures, Science Education, Mathematics Education, Intervention
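One of the design parameters such studies report, the intraclass correlation, can be estimated with a simple ANOVA-style calculation; the sketch below uses simulated data and a generic balanced-design estimator, not the estimates from the article.

```python
# ANOVA-style estimate of the intraclass correlation (ICC) for a balanced
# two-level design (students nested in schools), one of the design parameters
# that power analyses rely on. Simulated data, generic estimator.
import numpy as np

rng = np.random.default_rng(3)
J, n, true_icc = 100, 20, 0.15
school_effect = rng.normal(0, np.sqrt(true_icc), J)            # between-school component
y = school_effect[:, None] + rng.normal(0, np.sqrt(1 - true_icc), (J, n))

grand_mean = y.mean()
ms_between = n * ((y.mean(axis=1) - grand_mean) ** 2).sum() / (J - 1)
ms_within = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (J * (n - 1))

sigma2_between = max((ms_between - ms_within) / n, 0.0)
icc_hat = sigma2_between / (sigma2_between + ms_within)
print(f"estimated ICC = {icc_hat:.3f} (true value {true_icc})")
```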
Brunner, Martin; Keller, Ulrich; Wenger, Marina; Fischbach, Antoine; Lüdtke, Oliver – Journal of Research on Educational Effectiveness, 2018
To plan group-randomized trials where treatment conditions are assigned to schools, researchers need design parameters that provide information about between-school differences in outcomes as well as the amount of variance that can be explained by covariates at the student (L1) and school (L2) levels. Most previous research has offered these…
Descriptors: Academic Achievement, Student Motivation, Psychological Patterns, Learning Strategies
Porter, Kristin E.; Reardon, Sean F.; Unlu, Fatih; Bloom, Howard S.; Cimpian, Joseph R. – Journal of Research on Educational Effectiveness, 2017
A valuable extension of the single-rating regression discontinuity design (RDD) is a multiple-rating RDD (MRRDD). To date, four main methods have been used to estimate average treatment effects at the multiple treatment frontiers of an MRRDD: the "surface" method, the "frontier" method, the "binding-score" method, and…
Descriptors: Regression (Statistics), Intervention, Quasiexperimental Design, Simulation
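One common reading of the binding-score idea is sketched below with invented scores and cutoffs (not the article's simulations): when treatment is assigned whenever any rating falls below its cutoff, the centered ratings collapse into their minimum, which then serves as the single running variable.

```python
# Sketch of a "binding-score" construction for a multiple-rating RDD: with an
# assignment rule of "treated if any rating falls below its cutoff", the
# centered ratings can be collapsed into their minimum, and a standard
# single-rating RDD is then run on that score. Invented data and cutoffs.
import numpy as np

rng = np.random.default_rng(4)
math_score = rng.normal(50, 10, 5000)
read_score = rng.normal(50, 10, 5000)
math_cut, read_cut = 40.0, 42.0

centered = np.column_stack([math_score - math_cut, read_score - read_cut])
binding = centered.min(axis=1)          # the rating that "binds" the assignment
treated = binding < 0                   # treated if below either cutoff

print(f"share treated: {treated.mean():.2f}")
# `binding` can now be used as the single running variable in a standard RDD.
```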
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P. – Journal of Research on Educational Effectiveness, 2016
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Descriptors: Educational Research, Generalization, Sampling, Participant Characteristics
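A generic reweighting sketch of the generalization problem follows; it uses inverse-odds-of-sampling weights on simulated data and is illustrative only, not the specific estimators compared in the article.

```python
# Generic reweighting sketch for generalizing an experimental estimate to a
# target population: model the probability that a unit is in the experimental
# sample (vs. the population), then weight sample units by the inverse odds of
# sample membership so the weighted sample resembles the population on
# observables. Illustrative only; not the methods compared by Kern et al. (2016).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X_pop = rng.normal(0.5, 1.0, (5000, 2))            # hypothetical target population
X_exp = rng.normal(0.0, 1.0, (500, 2))             # experimental sample differs on X

X_all = np.vstack([X_exp, X_pop])
in_sample = np.r_[np.ones(len(X_exp)), np.zeros(len(X_pop))]
p_sample = LogisticRegression().fit(X_all, in_sample).predict_proba(X_exp)[:, 1]

weights = (1 - p_sample) / p_sample                 # inverse odds of sample membership
weights *= len(weights) / weights.sum()             # normalize to mean 1

print(f"weighted sample mean of X1: {np.average(X_exp[:, 0], weights=weights):.2f} "
      f"(population mean ~ {X_pop[:, 0].mean():.2f})")
```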