Showing 1 to 15 of 19 results
Peer reviewed
Peter Z. Schochet – Journal of Educational and Behavioral Statistics, 2025
Random encouragement designs evaluate treatments that aim to increase participation in a program or activity. These randomized controlled trials (RCTs) can also assess the mediated effects of participation itself on longer term outcomes using a complier average causal effect (CACE) estimation framework. This article considers power analysis…
Descriptors: Statistical Analysis, Computation, Causal Models, Research Design
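The CACE framework this abstract refers to scales the intent-to-treat (ITT) effect by the difference in take-up between arms, which also drives power: the minimum detectable CACE grows as take-up falls. A minimal illustrative sketch (function names are my own, not from the article):

```python
def cace_estimate(itt_effect, takeup_diff):
    """Wald/IV estimator: the complier average causal effect is the
    ITT effect divided by the difference in participation rates
    between the encouraged and non-encouraged arms."""
    return itt_effect / takeup_diff

def mde_cace(mde_itt, takeup_diff):
    """For power analysis: the minimum detectable CACE is the ITT
    minimum detectable effect inflated by the inverse take-up rate."""
    return mde_itt / takeup_diff

# A 0.10 ITT effect with a 40-point take-up differential implies
# a 0.25 effect of participation itself.
print(cace_estimate(0.10, 0.40))
```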
Wendy Chan; Larry Vernon Hedges – Journal of Educational and Behavioral Statistics, 2022
Multisite field experiments using the (generalized) randomized block design that assign treatments to individuals within sites are common in education and the social sciences. Under this design, there are two possible estimands of interest and they differ based on whether sites or blocks have fixed or random effects. When the average treatment…
Descriptors: Research Design, Educational Research, Statistical Analysis, Statistical Inference
Peer reviewed
Park, Soojin; Esterling, Kevin M. – Journal of Educational and Behavioral Statistics, 2021
The causal mediation literature has developed techniques to assess the sensitivity of an inference to pretreatment confounding, but these techniques are limited to the case of a single mediator. In this article, we extend sensitivity analysis to possible violations of pretreatment confounding in the case of multiple mediators. In particular, we…
Descriptors: Statistical Analysis, Research Design, Influences, Anxiety
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
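The closed-form power expressions the article derives build on the standard minimum-detectable-effect (MDE) calculation, where the MDE is the standard error of the impact estimate times a multiplier determined by the significance level and target power. A sketch of the simplest non-panel version, for orientation only (the article's DID/CITS formulas add terms for treatment timing and serial correlation):

```python
from statistics import NormalDist

def mde(n_per_arm, sigma=1.0, alpha=0.05, power=0.80):
    """Minimum detectable effect for a simple two-arm comparison:
    (z_{1-alpha/2} + z_{power}) times the standard error of the
    impact estimate, sqrt(2*sigma^2 / n)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * (2 * sigma**2 / n_per_arm) ** 0.5

# With 100 units per arm and unit variance, the MDE is about 0.40 SD.
print(round(mde(100), 3))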
Peer reviewed
Shen, Zuchao; Kelcey, Benjamin – Journal of Educational and Behavioral Statistics, 2020
Conventional optimal design frameworks consider a narrow range of sampling cost structures that thereby constrict their capacity to identify the most powerful and efficient designs. We relax several constraints of previous optimal design frameworks by allowing for variable sampling costs in cluster-randomized trials. The proposed framework…
Descriptors: Sampling, Research Design, Randomized Controlled Trials, Statistical Analysis
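The cost-structure idea can be seen in the classic two-level optimal-design result that the abstract's framework generalizes: with a fixed cost per cluster and per subject, the cost-efficient cluster size depends on the cost ratio and the intraclass correlation. A sketch of that baseline result (not the article's relaxed framework):

```python
def optimal_cluster_size(cost_per_cluster, cost_per_subject, icc):
    """Classic two-level optimal-design result: the cost-efficient
    number of subjects per cluster is
    sqrt((C_cluster / C_subject) * (1 - icc) / icc)."""
    return ((cost_per_cluster / cost_per_subject) * (1 - icc) / icc) ** 0.5

# Recruiting a cluster costs 100x a subject; icc = 0.10 -> n = 30.
print(round(optimal_cluster_size(400, 4, 0.10)))
```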
Peer reviewed
Wu, Edward; Gagnon-Bartsch, Johann A. – Journal of Educational and Behavioral Statistics, 2021
In paired experiments, participants are grouped into pairs with similar characteristics, and one observation from each pair is randomly assigned to treatment. The resulting treatment and control groups should be well-balanced; however, there may still be small chance imbalances. Building on work for completely randomized experiments, we propose a…
Descriptors: Experiments, Groups, Research Design, Statistical Analysis
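The baseline estimator in a paired experiment, before any adjustment for residual chance imbalance, is the mean of the within-pair treated-minus-control differences; pairing removes the outcome variation shared within each pair. A minimal sketch (my own illustration, not the article's proposed adjustment):

```python
def paired_ate(pairs):
    """Average treatment effect from a paired experiment: the mean
    within-pair (treated minus control) difference. Each tuple is
    (treated outcome, control outcome) for one matched pair."""
    return sum(t - c for t, c in pairs) / len(pairs)

# Three pairs with differences 2, 1, and 0 -> estimated effect 1.0.
print(paired_ate([(5, 3), (7, 6), (4, 4)]))
```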
Peer reviewed
Park, Soojin; Palardy, Gregory J. – Journal of Educational and Behavioral Statistics, 2020
Estimating the effects of randomized experiments and, by extension, their mediating mechanisms, is often complicated by treatment noncompliance. Two estimation methods for causal mediation in the presence of noncompliance have recently been proposed, the instrumental variable method (IV-mediate) and maximum likelihood method (ML-mediate). However,…
Descriptors: Computation, Compliance (Psychology), Maximum Likelihood Statistics, Statistical Analysis
Peer reviewed
Rhoads, Christopher – Journal of Educational and Behavioral Statistics, 2017
Researchers designing multisite and cluster randomized trials of educational interventions will usually conduct a power analysis in the planning stage of the study. To conduct the power analysis, researchers often use estimates of intracluster correlation coefficients and effect sizes derived from an analysis of survey data. When there is…
Descriptors: Statistical Analysis, Hierarchical Linear Modeling, Surveys, Effect Size
Peer reviewed
VanHoudnos, Nathan M.; Greenhouse, Joel B. – Journal of Educational and Behavioral Statistics, 2016
When cluster randomized experiments are analyzed as if units were independent, test statistics for treatment effects can be anticonservative. Hedges proposed a correction for such tests by scaling them to control their Type I error rate. This article generalizes the Hedges correction from a posttest-only experimental design to more common designs…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Error of Measurement, Scaling
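The posttest-only correction being generalized rescales the naive t-statistic using the intraclass correlation so that its Type I error rate is controlled. A sketch of that scaling for equal cluster sizes, following Hedges's (2007) correction (variable names are mine):

```python
def hedges_corrected_t(t_naive, n_total, cluster_size, icc):
    """Rescale a t-statistic computed as if units were independent,
    for a posttest-only cluster-randomized design with equal cluster
    sizes. With icc = 0 the statistic is unchanged; clustering
    (icc > 0) shrinks it toward a properly sized test."""
    num = (n_total - 2) - 2 * (cluster_size - 1) * icc
    den = (n_total - 2) * (1 + (cluster_size - 1) * icc)
    return ((num / den) ** 0.5) * t_naive

# No clustering: the statistic is unchanged.
print(hedges_corrected_t(2.0, 100, 10, 0.0))
# icc = 0.2 with clusters of 10: a "significant" t of 2.0 shrinks.
print(round(hedges_corrected_t(2.0, 100, 10, 0.2), 3))
```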
Peer reviewed
Gu, Fei; Preacher, Kristopher J.; Ferrer, Emilio – Journal of Educational and Behavioral Statistics, 2014
Mediation is a causal process that evolves over time. Thus, a study of mediation requires data collected throughout the process. However, most applications of mediation analysis use cross-sectional rather than longitudinal data. Another implicit assumption commonly made in longitudinal designs for mediation analysis is that the same mediation…
Descriptors: Statistical Analysis, Models, Research Design, Case Studies
Peer reviewed
Safarkhani, Maryam; Moerbeek, Mirjam – Journal of Educational and Behavioral Statistics, 2013
In a randomized controlled trial, a decision needs to be made about the total number of subjects for adequate statistical power. One way to increase the power of a trial is by including a predictive covariate in the model. In this article, the effects of various covariate adjustment strategies on increasing the power is studied for discrete-time…
Descriptors: Statistical Analysis, Scientific Methodology, Research Design, Sample Size
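The basic mechanism behind covariate adjustment raising power is that a covariate explaining a share R² of outcome variance shrinks the residual variance, and hence the required sample size, by roughly the factor (1 − R²). A back-of-envelope sketch of that general idea (not the article's discrete-time survival results):

```python
def adjusted_sample_size(n_unadjusted, r_squared):
    """Approximate sample size needed for the same power after adding
    a covariate that explains r_squared of the outcome variance:
    residual variance, and thus n, scales by (1 - r_squared)."""
    return n_unadjusted * (1 - r_squared)

# A pretest with R^2 = 0.25 cuts a 400-subject requirement to 300.
print(adjusted_sample_size(400, 0.25))
```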
Peer reviewed
Pustejovsky, James E.; Hedges, Larry V.; Shadish, William R. – Journal of Educational and Behavioral Statistics, 2014
In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general…
Descriptors: Hierarchical Linear Modeling, Effect Size, Maximum Likelihood Statistics, Computation
Peer reviewed
Rietbergen, Charlotte; Moerbeek, Mirjam – Journal of Educational and Behavioral Statistics, 2011
The inefficiency induced by between-cluster variation in cluster randomized (CR) trials can be reduced by implementing a crossover (CO) design. In a simple CO trial, each subject receives each treatment in random order. A powerful characteristic of this design is that each subject serves as its own control. In a CR CO trial, clusters of subjects…
Descriptors: Research Design, Experimental Groups, Control Groups, Efficiency
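The efficiency gain from each subject serving as their own control can be quantified in the simplest two-period case: with within-subject correlation ρ, the variance of the crossover estimate is a fraction (1 − ρ)/2 of the parallel-group variance for the same total number of subjects. A sketch of that textbook comparison, not the cluster-randomized crossover analysis in the article:

```python
def crossover_relative_variance(rho):
    """Variance of the treatment-effect estimate in a simple 2-period
    crossover relative to a parallel-group design with the same total
    number of subjects, assuming within-subject correlation rho and
    equal outcome variance in both periods."""
    return (1 - rho) / 2

# With rho = 0.5, the crossover needs only a quarter of the variance
# budget of a parallel-group trial.
print(crossover_relative_variance(0.5))
```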
Peer reviewed
Rhoads, Christopher H. – Journal of Educational and Behavioral Statistics, 2011
Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…
Descriptors: Educational Research, Research Design, Effect Size, Experimental Groups
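The precision penalty of assigning intact clusters is usually summarized by the design effect, 1 + (m − 1)ρ, which deflates the effective sample size. A minimal sketch of this standard quantity:

```python
def design_effect(cluster_size, icc):
    """Variance inflation from randomizing intact clusters of size m
    with intraclass correlation icc: DEFF = 1 + (m - 1) * icc."""
    return 1 + (cluster_size - 1) * icc

def effective_n(n_total, cluster_size, icc):
    """Number of independent observations a clustered sample is
    worth: total n deflated by the design effect."""
    return n_total / design_effect(cluster_size, icc)

# 1,000 students in classrooms of 25 with icc = 0.10 carry the
# information of roughly 294 independent students.
print(round(effective_n(1000, 25, 0.10)))
```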
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2013
In school-based randomized controlled trials (RCTs), a common design is to follow student cohorts over time. For such designs, education researchers usually focus on the place-based (PB) impact parameter, which is estimated using data collected on all students enrolled in the study schools at each data collection point. A potential problem with this…
Descriptors: Student Mobility, Scientific Methodology, Research Design, Intervention