Showing all 10 results
Peer reviewed
Kelcey, Benjamin; Dong, Nianbo; Spybrook, Jessaca; Cox, Kyle – Journal of Educational and Behavioral Statistics, 2017
Designs that facilitate inferences concerning both the total and indirect effects of a treatment potentially offer a more holistic description of interventions because they can complement "what works" questions with the comprehensive study of the causal connections implied by substantive theories. Mapping the sensitivity of designs to…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Mediation Theory, Models
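The indirect effects this abstract refers to are usually estimated as a product of coefficients. As a minimal sketch (a single-level mediation model with a Sobel test, purely illustrative; the article itself concerns multilevel randomized designs, and all data below are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-level mediation data (hypothetical values):
# treatment -> mediator (a-path = 0.4), mediator -> outcome (b-path = 0.3).
n = 500
t = rng.integers(0, 2, n)                    # treatment assignment
m = 0.4 * t + rng.normal(size=n)             # mediator
y = 0.3 * m + 0.2 * t + rng.normal(size=n)   # outcome with a direct effect

def ols(y, preds):
    """Least-squares fit with intercept; returns coefficients and SEs."""
    X = np.column_stack([np.ones(len(y))] + list(preds))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

b_m, se_m = ols(m, [t])        # a-path: treatment -> mediator
b_y, se_y = ols(y, [m, t])     # b-path: mediator -> outcome, given treatment

a, sa = b_m[1], se_m[1]
b, sb = b_y[1], se_y[1]
indirect = a * b
z_sobel = indirect / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
print(indirect, z_sobel)
```

The total effect is the treatment coefficient from a regression of `y` on `t` alone; designs sensitive to both effects must power both tests.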
Peer reviewed
VanHoudnos, Nathan M.; Greenhouse, Joel B. – Journal of Educational and Behavioral Statistics, 2016
When cluster randomized experiments are analyzed as if units were independent, test statistics for treatment effects can be anticonservative. Hedges proposed a correction for such tests by scaling them to control their Type I error rate. This article generalizes the Hedges correction from a posttest-only experimental design to more common designs…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Error of Measurement, Scaling
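The intuition behind such corrections can be sketched with the design effect, 1 + (m − 1)ρ, where m is cluster size and ρ the intraclass correlation. This is a simplified illustration only; the Hedges correction discussed above also adjusts the degrees of freedom, which this sketch omits:

```python
import math

def deff_corrected_t(t_naive, cluster_size, icc):
    """Deflate a t statistic computed under an (incorrect) independence
    assumption by the square root of the design effect 1 + (m - 1) * rho.
    Simplified illustration; not the exact published correction."""
    deff = 1.0 + (cluster_size - 1) * icc
    return t_naive / math.sqrt(deff)

# With clusters of 25 students and ICC = 0.10, a naive t of 3.0 shrinks:
t_adj = deff_corrected_t(3.0, cluster_size=25, icc=0.10)
print(round(t_adj, 2))  # 1.63 -- no longer significant at conventional levels
```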
Peer reviewed
Ahn, Soyeon; Becker, Betsy Jane – Journal of Educational and Behavioral Statistics, 2011
This paper examines the impact of quality-score weights in meta-analysis. A simulation examines the roles of study characteristics such as population effect size (ES) and its variance on the bias and mean square errors (MSEs) of the estimators for several patterns of relationship between quality and ES, and for specific patterns of systematic…
Descriptors: Meta Analysis, Scores, Effect Size, Statistical Bias
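One common form of quality weighting, and a plausible reading of the schemes simulated above (the specific weighting patterns studied are not given here), multiplies each study's inverse-variance weight by a quality score in [0, 1]:

```python
def weighted_mean_es(effects, variances, quality=None):
    """Weighted mean effect size. With quality=None this is the usual
    inverse-variance estimate; otherwise each weight is multiplied by a
    quality score in [0, 1] (one illustrative quality-weighting scheme)."""
    if quality is None:
        quality = [1.0] * len(effects)
    weights = [q / v for q, v in zip(quality, variances)]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Hypothetical effect sizes, sampling variances, and quality scores.
effects = [0.30, 0.10, 0.55]
variances = [0.02, 0.05, 0.04]
plain = weighted_mean_es(effects, variances)
weighted = weighted_mean_es(effects, variances, quality=[1.0, 0.4, 0.9])
print(round(plain, 3), round(weighted, 3))
```

Down-weighting the low-quality middle study pulls the pooled estimate toward the higher-quality studies, which is exactly where bias can enter when quality is correlated with effect size.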
Peer reviewed
Hedges, Larry V. – Journal of Educational and Behavioral Statistics, 2011
Research designs involving cluster randomization are becoming increasingly important in educational and behavioral research. Many of these designs involve two levels of clustering or nesting (students within classes and classes within schools). Researchers would like to compute effect size indexes based on the standardized mean difference to…
Descriptors: Effect Size, Research Design, Experiments, Computation
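With nesting there is more than one defensible standardizer for the mean difference, which is the core issue here. A minimal sketch of the identity linking two of them (total vs. within-cluster standardization, for a two-level design; the article itself covers three-level designs and the sampling variances of these indexes):

```python
import math

def smd_variants(mean_t, mean_c, sd_total, icc):
    """Standardized mean differences under two-level nesting.  Since
    sigma_within^2 = (1 - icc) * sigma_total^2, the within-cluster-
    standardized effect is the total-standardized one divided by
    sqrt(1 - icc).  Illustrative identity only."""
    d_total = (mean_t - mean_c) / sd_total
    d_within = d_total / math.sqrt(1.0 - icc)
    return d_total, d_within

# Hypothetical means, total SD, and intraclass correlation.
d_t, d_w = smd_variants(mean_t=105.0, mean_c=100.0, sd_total=15.0, icc=0.19)
print(round(d_t, 3), round(d_w, 3))
```

The two indexes answer different questions, so meta-analysts must know which standardizer a reported effect size used before pooling.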
Peer reviewed
Fan, Weihua; Hancock, Gregory R. – Journal of Educational and Behavioral Statistics, 2012
This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…
Descriptors: Robustness (Statistics), Hypothesis Testing, Monte Carlo Methods, Simulation
Peer reviewed
Aloe, Ariel M.; Becker, Betsy Jane – Journal of Educational and Behavioral Statistics, 2012
A new effect size representing the predictive power of an independent variable from a multiple regression model is presented. The index, denoted as r[subscript sp], is the semipartial correlation of the predictor with the outcome of interest. This effect size can be computed when multiple predictor variables are included in the regression model…
Descriptors: Meta Analysis, Effect Size, Multiple Regression Analysis, Models
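The squared semipartial correlation has a simple definitional route: it is the increment in R² when the predictor of interest is added to a model already containing the others. A sketch on simulated data (variable names and coefficients are hypothetical):

```python
import numpy as np

def r_squared(y, preds):
    """R^2 from an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(preds))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = ((y - y.mean()) ** 2).sum()
    return 1.0 - (resid @ resid) / tss

rng = np.random.default_rng(1)
n = 400
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)            # correlated predictors
y = 0.4 * x1 + 0.3 * x2 + rng.normal(size=n)

# Squared semipartial correlation of x1: the R^2 gain from adding x1
# to a model that already contains x2.  (Sign comes from x1's slope.)
sr2 = r_squared(y, [x1, x2]) - r_squared(y, [x2])
r_sp = np.sqrt(sr2)
print(round(float(r_sp), 3))
```

In the meta-analytic setting of this article, r_sp is typically recovered from reported regression summaries (t statistics, R², degrees of freedom) rather than raw data, since primary studies rarely share the latter.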
Peer reviewed
Han, Bing; Dalal, Siddhartha R.; McCaffrey, Daniel F. – Journal of Educational and Behavioral Statistics, 2012
There is widespread interest in using various statistical inference tools as a part of the evaluations for individual teachers and schools. Evaluation systems typically involve classifying hundreds or even thousands of teachers or schools according to their estimated performance. Many current evaluations are largely based on individual estimates…
Descriptors: Statistical Inference, Error of Measurement, Classification, Statistical Analysis
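The classification problem can be made concrete with a small simulation: when estimates are noisy, a nontrivial share of units flagged as low-performing are not truly in the bottom group. All quantities below are hypothetical and purely illustrative of the general issue, not of any particular evaluation system:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical true "performance" of 1,000 units and noisy estimates of it.
n = 1000
true = rng.normal(size=n)
est = true + rng.normal(scale=0.7, size=n)   # estimation error ~ signal

# Flag the bottom 10% by estimate; ask how often the flag is wrong,
# i.e. the unit's true performance is above the true 10th percentile.
cut_true = np.quantile(true, 0.10)
cut_est = np.quantile(est, 0.10)
flagged = est <= cut_est
false_flag_rate = float(np.mean(true[flagged] > cut_true))
print(round(false_flag_rate, 2))
```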
Peer reviewed
Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James – Journal of Educational and Behavioral Statistics, 2013
Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…
Descriptors: Accountability, Educational Research, Educational Testing, Error of Measurement
Peer reviewed
DeMars, Christine E. – Journal of Educational and Behavioral Statistics, 2009
The Mantel-Haenszel (MH) and logistic regression (LR) differential item functioning (DIF) procedures have inflated Type I error rates when there are large mean group differences, short tests, and large sample sizes. When there are large group differences in mean score, groups matched on the observed number-correct score differ on true score,…
Descriptors: Regression (Statistics), Test Bias, Error of Measurement, True Scores
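The MH procedure referenced above pools 2×2 tables across matched score strata into a common odds ratio; values far from 1 suggest DIF. A minimal sketch with hypothetical item data:

```python
def mh_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio across matched score strata.
    Each table is (a, b, c, d): reference-group correct/incorrect and
    focal-group correct/incorrect at one number-correct score level."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Hypothetical counts for one item at three score strata.
tables = [(40, 10, 30, 20), (60, 15, 50, 25), (80, 5, 70, 15)]
or_mh = mh_odds_ratio(tables)
print(round(or_mh, 2))  # 2.49: the reference group finds the item easier
```

The inflation problem described in the abstract arises because the matching variable (observed number-correct score) is itself an error-prone proxy for the true score on which the groups differ.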
Peer reviewed
Yuan, Ke-Hai; Maxwell, Scott – Journal of Educational and Behavioral Statistics, 2005
Retrospective or post hoc power analysis is recommended by reviewers and editors of many journals, yet little literature has studied post hoc power seriously. When the sample size is large, the observed effect size is a good estimator of the true effect size. This article studies whether such a power estimator provides valuable…
Descriptors: Effect Size, Computation, Monte Carlo Methods, Bias
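Post hoc power plugs the observed effect size into an ordinary power formula. A sketch using a normal approximation for a two-sided, two-sample test (the approximation and the numbers are illustrative; the article's analysis is of the estimator's statistical properties, not this formula per se):

```python
from statistics import NormalDist

def posthoc_power(d_obs, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sided two-sample mean test,
    evaluated at the *observed* standardized mean difference -- i.e.
    'post hoc power'."""
    z = NormalDist()
    z_crit = z.inv_cdf(1.0 - alpha / 2.0)
    ncp = d_obs * (n_per_group / 2.0) ** 0.5   # noncentrality, equal n
    return 1.0 - z.cdf(z_crit - ncp) + z.cdf(-z_crit - ncp)

# A just-barely-significant result always yields observed power near 0.5,
# one reason post hoc power adds little beyond the p-value itself.
p_obs = posthoc_power(d_obs=0.28, n_per_group=100)
print(round(p_obs, 2))
```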