Luke Keele; Matthew Lenard; Lindsay Page – Journal of Research on Educational Effectiveness, 2024
In education settings, treatments are often non-randomly assigned to clusters, such as schools or classrooms, while outcomes are measured for students. This research design is called the clustered observational study (COS). We examine the consequences of common support violations in the COS context. Common support violations occur when the…
Descriptors: Intervention, Cluster Grouping, Observation, Catholic Schools
Shen, Zuchao; Kelcey, Benjamin – Journal of Research on Educational Effectiveness, 2022
Optimal sampling frameworks attempt to identify the most efficient sampling plans that achieve adequate statistical power. Although such calculations are theoretical in nature, they are critical to the judicious and wise use of funding because they serve as important starting points that guide practical discussions around sampling tradeoffs and…
Descriptors: Sampling, Research Design, Randomized Controlled Trials, Statistical Analysis
Li, Wei; Dong, Nianbo; Maynard, Rebecca; Spybrook, Jessaca; Kelcey, Ben – Journal of Research on Educational Effectiveness, 2023
Cluster randomized trials (CRTs) are commonly used to evaluate educational interventions, particularly their effectiveness. Recently there has been greater emphasis on using these trials to explore cost-effectiveness. However, methods for establishing the power of cluster randomized cost-effectiveness trials (CRCETs) are limited. This study…
Descriptors: Research Design, Statistical Analysis, Randomized Controlled Trials, Cost Effectiveness
Kelcey, Ben; Spybrook, Jessaca; Dong, Nianbo; Bai, Fangxing – Journal of Research on Educational Effectiveness, 2020
Professional development for teachers is regarded as one of the principal pathways through which we can understand and cultivate effective teaching and improve student outcomes. A critical component of studies that seek to improve teaching through professional development is the detailed assessment of the intermediate teacher development processes…
Descriptors: Faculty Development, Educational Research, Randomized Controlled Trials, Research Design
Stallasch, Sophie E.; Lüdtke, Oliver; Artelt, Cordula; Brunner, Martin – Journal of Research on Educational Effectiveness, 2021
To plan cluster-randomized trials with sufficient statistical power to detect intervention effects on student achievement, researchers need multilevel design parameters, including measures of between-classroom and between-school differences and the amounts of variance explained by covariates at the student, classroom, and school level. Previous…
Descriptors: Foreign Countries, Randomized Controlled Trials, Intervention, Educational Research
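The multilevel design parameters this entry refers to reduce to variance decompositions across levels. As a hedged illustration only (not the authors' method or data), the between-cluster intraclass correlation for balanced two-level data can be estimated with the classic one-way ANOVA estimator; the function name and inputs below are invented for the sketch.

```python
def icc_one_way(groups):
    """ANOVA estimator of the intraclass correlation for balanced
    clusters: rho = (MSB - MSW) / (MSB + (n - 1) * MSW).

    groups : list of equal-length lists, one list of outcomes per
             cluster (e.g., per classroom or school).
    """
    J = len(groups)          # number of clusters
    n = len(groups[0])       # students per cluster (balanced design)
    grand = sum(sum(g) for g in groups) / (J * n)
    means = [sum(g) / n for g in groups]
    # Between-cluster and within-cluster mean squares
    msb = n * sum((m - grand) ** 2 for m in means) / (J - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (J * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Two toy clusters of three students each
print(round(icc_one_way([[1, 2, 3], [4, 5, 6]]), 3))  # → 0.806
```

In practice such parameters are estimated from large representative samples with mixed-effects models; this closed-form estimator is just the simplest balanced-design special case.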
Westine, Carl D.; Unlu, Fatih; Taylor, Joseph; Spybrook, Jessaca; Zhang, Qi; Anderson, Brent – Journal of Research on Educational Effectiveness, 2020
Experimental research in education and training programs typically involves administering treatment to whole groups of individuals. As such, researchers rely on the estimation of design parameter values to conduct power analyses to efficiently plan their studies to detect desired effects. In this study, we present design parameter estimates from a…
Descriptors: Outcome Measures, Science Education, Mathematics Education, Intervention
Spybrook, Jessaca; Hedges, Larry; Borenstein, Michael – Journal of Research on Educational Effectiveness, 2014
Research designs in which clusters are the unit of randomization are quite common in the social sciences. Given the multilevel nature of these studies, the power analyses for these studies are more complex than in a simple individually randomized trial. Tools are now available to help researchers conduct power analyses for cluster randomized…
Descriptors: Statistical Analysis, Research Design, Vocabulary, Coding
Brunner, Martin; Keller, Ulrich; Wenger, Marina; Fischbach, Antoine; Lüdtke, Oliver – Journal of Research on Educational Effectiveness, 2018
To plan group-randomized trials where treatment conditions are assigned to schools, researchers need design parameters that provide information about between-school differences in outcomes as well as the amount of variance that can be explained by covariates at the student (L1) and school (L2) levels. Most previous research has offered these…
Descriptors: Academic Achievement, Student Motivation, Psychological Patterns, Learning Strategies
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P. – Journal of Research on Educational Effectiveness, 2016
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Descriptors: Educational Research, Generalization, Sampling, Participant Characteristics
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2014
Recent publications have drawn attention to the idea of utilizing prior information about the correlation structure to improve statistical power in cluster randomized experiments. Because power in cluster randomized designs is a function of many different parameters, it has been difficult for applied researchers to discern a simple rule explaining…
Descriptors: Correlation, Statistical Analysis, Multivariate Analysis, Research Design
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon – Journal of Research on Educational Effectiveness, 2016
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated…
Descriptors: Educational Research, Research Design, Intervention, Statistical Analysis
Dong, Nianbo; Maynard, Rebecca – Journal of Research on Educational Effectiveness, 2013
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
Descriptors: Effect Size, Sample Size, Research Design, Quasiexperimental Design
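The MDES framework this entry describes can be sketched for the simplest case, a two-level cluster randomized design with treatment assigned at the cluster level. This is a generic illustration under stated assumptions, not the authors' tool: it uses a normal approximation for the degrees-of-freedom multiplier, assumes no covariates, and the parameter names (`rho`, `J`, `n`, `P`) are chosen for the sketch.

```python
from math import sqrt
from statistics import NormalDist

def mdes_two_level(rho, J, n, P=0.5, alpha=0.05, power=0.80):
    """Approximate minimum detectable effect size (in standard
    deviation units) for a two-level cluster randomized trial with
    treatment at the cluster level and no covariates.

    rho : intraclass correlation (between-cluster variance share)
    J   : total number of clusters;  n : students per cluster
    P   : proportion of clusters assigned to treatment

    Uses the normal approximation M = z_{1-alpha/2} + z_{power};
    exact tools use t quantiles with J - 2 degrees of freedom.
    """
    z = NormalDist().inv_cdf
    M = z(1 - alpha / 2) + z(power)
    var_term = (rho / (P * (1 - P) * J)
                + (1 - rho) / (P * (1 - P) * J * n))
    return M * sqrt(var_term)

# Example: 40 schools, 25 students each, ICC = 0.20
print(round(mdes_two_level(rho=0.20, J=40, n=25), 3))  # → 0.427
```

Note how the cluster-level variance term shrinks with the number of clusters J but not with students per cluster n, which is why adding clusters usually buys more power than adding students within clusters.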
Reardon, Sean F.; Robinson, Joseph P. – Journal of Research on Educational Effectiveness, 2012
In the absence of a randomized control trial, regression discontinuity (RD) designs can produce plausible estimates of the treatment effect on an outcome for individuals near a cutoff score. In the standard RD design, individuals with rating scores higher than some exogenously determined cutoff score are assigned to one treatment condition; those…
Descriptors: Regression (Statistics), Research Design, Cutting Scores, Computation
Weiss, Michael J.; Bloom, Howard S.; Verbitsky-Savitz, Natalya; Gupta, Himani; Vigil, Alma E.; Cullinan, Daniel N. – Journal of Research on Educational Effectiveness, 2017
Multisite trials, in which individuals are randomly assigned to alternative treatment arms within sites, offer an excellent opportunity to estimate the cross-site average effect of treatment assignment (intent to treat or ITT) "and" the amount by which this impact varies across sites. Although both of these statistics are substantively…
Descriptors: Randomized Controlled Trials, Evidence, Models, Intervention
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2016
Experimental evaluations that involve the educational system usually involve a hierarchical structure (students are nested within classrooms that are nested within schools, etc.). Concerns about contamination, where research subjects receive certain features of an intervention intended for subjects in a different experimental group, have often led…
Descriptors: Educational Experiments, Error of Measurement, Research Design, Statistical Analysis