Showing 1 to 15 of 27 results
Peer reviewed
Luke Keele; Matthew Lenard; Lindsay Page – Journal of Research on Educational Effectiveness, 2024
In education settings, treatments are often non-randomly assigned to clusters, such as schools or classrooms, while outcomes are measured for students. This research design is called the clustered observational study (COS). We examine the consequences of common support violations in the COS context. Common support violations occur when the…
Descriptors: Intervention, Cluster Grouping, Observation, Catholic Schools
Peer reviewed
Shen, Zuchao; Kelcey, Benjamin – Journal of Research on Educational Effectiveness, 2022
Optimal sampling frameworks attempt to identify the most efficient sampling plans to achieve adequate statistical power. Although such calculations are theoretical in nature, they are critical to the judicious and wise use of funding because they serve as important starting points that guide practical discussions around sampling tradeoffs and…
Descriptors: Sampling, Research Design, Randomized Controlled Trials, Statistical Analysis
Peer reviewed
Li, Wei; Dong, Nianbo; Maynard, Rebecca; Spybrook, Jessaca; Kelcey, Ben – Journal of Research on Educational Effectiveness, 2023
Cluster randomized trials (CRTs) are commonly used to evaluate educational interventions, particularly their effectiveness. Recently there has been greater emphasis on using these trials to explore cost-effectiveness. However, methods for establishing the power of cluster randomized cost-effectiveness trials (CRCETs) are limited. This study…
Descriptors: Research Design, Statistical Analysis, Randomized Controlled Trials, Cost Effectiveness
Peer reviewed
Kelcey, Ben; Spybrook, Jessaca; Dong, Nianbo; Bai, Fangxing – Journal of Research on Educational Effectiveness, 2020
Professional development for teachers is regarded as one of the principal pathways through which we can understand and cultivate effective teaching and improve student outcomes. A critical component of studies that seek to improve teaching through professional development is the detailed assessment of the intermediate teacher development processes…
Descriptors: Faculty Development, Educational Research, Randomized Controlled Trials, Research Design
Peer reviewed
Stallasch, Sophie E.; Lüdtke, Oliver; Artelt, Cordula; Brunner, Martin – Journal of Research on Educational Effectiveness, 2021
To plan cluster-randomized trials with sufficient statistical power to detect intervention effects on student achievement, researchers need multilevel design parameters, including measures of between-classroom and between-school differences and the amounts of variance explained by covariates at the student, classroom, and school level. Previous…
Descriptors: Foreign Countries, Randomized Controlled Trials, Intervention, Educational Research
Peer reviewed
Westine, Carl D.; Unlu, Fatih; Taylor, Joseph; Spybrook, Jessaca; Zhang, Qi; Anderson, Brent – Journal of Research on Educational Effectiveness, 2020
Experimental research in education and training programs typically involves administering treatment to whole groups of individuals. As such, researchers rely on the estimation of design parameter values to conduct power analyses to efficiently plan their studies to detect desired effects. In this study, we present design parameter estimates from a…
Descriptors: Outcome Measures, Science Education, Mathematics Education, Intervention
Peer reviewed
Dong, Nianbo; Maynard, Rebecca – Journal of Research on Educational Effectiveness, 2013
This paper and the accompanying tool are intended to complement existing power analysis tools by offering one based on the framework of Minimum Detectable Effect Size (MDES) formulae that can be used to determine sample size requirements and to estimate minimum detectable effect sizes for a range of individual- and…
Descriptors: Effect Size, Sample Size, Research Design, Quasiexperimental Design
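The MDES framework this entry describes can be sketched in a few lines. The following is an illustrative sketch, not the authors' tool: it uses the standard two-level variance expression for a cluster randomized design (treatment assigned at the cluster level) and a normal approximation to the t multiplier, which is adequate when the number of clusters is moderately large.

```python
from statistics import NormalDist

def mdes_two_level(J, n, icc, r2_cluster=0.0, r2_student=0.0,
                   p_treat=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect size (standardized) for a two-level
    cluster randomized trial with J clusters of n students each.

    icc          intraclass correlation (share of variance between clusters)
    r2_cluster   variance explained by covariates at the cluster level
    r2_student   variance explained by covariates at the student level
    Uses a normal approximation to the t multiplier (fine for J >~ 30).
    """
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    var = (icc * (1 - r2_cluster) / (p_treat * (1 - p_treat) * J)
           + (1 - icc) * (1 - r2_student) / (p_treat * (1 - p_treat) * J * n))
    return multiplier * var ** 0.5

# 40 schools of 25 students, ICC = .20, a pretest explaining half the
# school-level variance:
print(round(mdes_two_level(J=40, n=25, icc=0.20, r2_cluster=0.5), 3))  # → 0.322
```

Doubling the number of clusters shrinks the MDES far more than doubling cluster size, which is the usual design tradeoff these tools help quantify.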
Peer reviewed
Spybrook, Jessaca; Hedges, Larry; Borenstein, Michael – Journal of Research on Educational Effectiveness, 2014
Research designs in which clusters are the unit of randomization are quite common in the social sciences. Given the multilevel nature of these studies, the power analyses for these studies are more complex than in a simple individually randomized trial. Tools are now available to help researchers conduct power analyses for cluster randomized…
Descriptors: Statistical Analysis, Research Design, Vocabulary, Coding
Peer reviewed
Brunner, Martin; Keller, Ulrich; Wenger, Marina; Fischbach, Antoine; Lüdtke, Oliver – Journal of Research on Educational Effectiveness, 2018
To plan group-randomized trials where treatment conditions are assigned to schools, researchers need design parameters that provide information about between-school differences in outcomes as well as the amount of variance that can be explained by covariates at the student (L1) and school (L2) levels. Most previous research has offered these…
Descriptors: Academic Achievement, Student Motivation, Psychological Patterns, Learning Strategies
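The central design parameter in entries like this one is the intraclass correlation, the share of outcome variance lying between schools. A minimal sketch of how such a parameter can be estimated, here via the classic one-way ANOVA estimator on simulated balanced data (illustrative only, not the authors' estimation approach):

```python
import random
from statistics import mean

def anova_icc(groups):
    """One-way ANOVA estimator of the intraclass correlation from a
    list of clusters (each a list of outcomes), assuming equal sizes."""
    J = len(groups)
    n = len(groups[0])
    gmeans = [mean(g) for g in groups]
    grand = mean(gmeans)  # valid because cluster sizes are equal
    msb = n * sum((m - grand) ** 2 for m in gmeans) / (J - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, gmeans)
              for x in g) / (J * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

random.seed(0)
icc_true = 0.20
schools = []
for _ in range(200):                              # 200 schools of 30 students
    u = random.gauss(0, icc_true ** 0.5)          # shared school effect
    schools.append([u + random.gauss(0, (1 - icc_true) ** 0.5)
                    for _ in range(30)])
print(round(anova_icc(schools), 2))               # close to 0.20
```

Published tables of such parameters (by outcome, grade, and country) let researchers plug realistic values into power calculations instead of guessing.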
Peer reviewed
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P. – Journal of Research on Educational Effectiveness, 2016
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Descriptors: Educational Research, Generalization, Sampling, Participant Characteristics
Peer reviewed
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2014
Recent publications have drawn attention to the idea of utilizing prior information about the correlation structure to improve statistical power in cluster randomized experiments. Because power in cluster randomized designs is a function of many different parameters, it has been difficult for applied researchers to discern a simple rule explaining…
Descriptors: Correlation, Statistical Analysis, Multivariate Analysis, Research Design
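The payoff from exploiting prior correlation information can be made concrete by inverting a power formula: how many clusters are needed to detect a given effect, with and without a covariate that explains cluster-level variance. A hedged sketch using the standard two-level variance expression for a balanced two-arm design and a normal-approximate multiplier (not the rule derived in this article):

```python
from math import ceil
from statistics import NormalDist

def clusters_needed(delta, n, icc, r2_cluster=0.0, r2_student=0.0,
                    alpha=0.05, power=0.80):
    """Smallest number of clusters J (balanced two-arm cluster
    randomized design, n students per cluster) whose minimum
    detectable effect size reaches the target standardized effect
    `delta`. Normal approximation to the t multiplier."""
    z = NormalDist()
    m = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    # standardized variance of the impact estimate, per 1/J:
    per_j = 4 * (icc * (1 - r2_cluster) + (1 - icc) * (1 - r2_student) / n)
    return ceil((m / delta) ** 2 * per_j)

# Detecting d = 0.25 with 25 students per cluster and ICC = .20:
print(clusters_needed(0.25, n=25, icc=0.20))                  # → 117 clusters
print(clusters_needed(0.25, n=25, icc=0.20, r2_cluster=0.6))  # → 57 clusters
```

A cluster-level covariate correlated ~0.77 with the cluster means (R² = 0.6) roughly halves the required sample, which is why knowing the correlation structure in advance matters so much for planning.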
Peer reviewed
Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin – Journal of Research on Educational Effectiveness, 2017
The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…
Descriptors: Evaluation Research, Program Evaluation, Welfare Services, Employment
Peer reviewed
Voight, Adam; Velez, Valerie – Journal of Research on Educational Effectiveness, 2018
This study employed a quasi-experimental design to examine the effects of a school-based youth participatory action research program on the education outcomes of participating high school students. The program was a year-long elective course in six high schools in the same California district whose student population is predominantly low-income…
Descriptors: High School Students, Participatory Research, Action Research, Student Research
Peer reviewed
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon – Journal of Research on Educational Effectiveness, 2016
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated…
Descriptors: Educational Research, Research Design, Intervention, Statistical Analysis
Peer reviewed
Steiner, Peter M.; Cook, Thomas D.; Li, Wei; Clark, M. H. – Journal of Research on Educational Effectiveness, 2015
In observational studies, selection bias will be completely removed only if the selection mechanism is ignorable, namely, all confounders of treatment selection and potential outcomes are reliably measured. Ideally, well-grounded substantive theories about the selection process and outcome-generating model are used to generate the sample of…
Descriptors: Quasiexperimental Design, Bias, Selection, Observation