Vembye, Mikkel Helding; Pustejovsky, James Eric; Pigott, Therese Deocampo – Journal of Educational and Behavioral Statistics, 2023
Meta-analytic models for dependent effect sizes have grown increasingly sophisticated over the last few decades, which has created challenges for a priori power calculations. We introduce power approximations for tests of average effect sizes based upon several common approaches for handling dependent effect sizes. In a Monte Carlo simulation, we…
Descriptors: Meta Analysis, Robustness (Statistics), Statistical Analysis, Models
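As a rough illustration of the kind of a priori power calculation described above, the sketch below applies the standard normal approximation for a two-sided test of an average effect size given an assumed standard error. This is a generic textbook sketch, not the authors' approximations for dependent effect sizes; the effect size and standard error values are hypothetical, and in practice the standard error would come from the chosen working model (e.g., robust variance estimation).

```python
# Minimal sketch (not the authors' method): normal-approximation power for a
# two-sided z-test of an average effect size, given an assumed standard error.
# The standard error is a placeholder; in a dependent-effects meta-analysis it
# would be derived from the chosen working model.
from scipy.stats import norm

def approx_power(avg_effect, se, alpha=0.05):
    """Approximate power for H0: average effect = 0 (two-sided z-test)."""
    z_crit = norm.ppf(1 - alpha / 2)
    z = abs(avg_effect) / se
    return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

# Example: average standardized mean difference of 0.2 with SE of 0.06
print(round(approx_power(0.2, 0.06), 3))  # roughly 0.92
```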
Joshua B. Gilbert; Luke W. Miratrix; Mridul Joshi; Benjamin W. Domingue – Journal of Educational and Behavioral Statistics, 2025
Analyzing heterogeneous treatment effects (HTEs) plays a crucial role in understanding the impacts of educational interventions. A standard practice for HTE analysis is to examine interactions between treatment status and preintervention participant characteristics, such as pretest scores, to identify how different groups respond to treatment.…
Descriptors: Causal Models, Item Response Theory, Statistical Inference, Psychometrics
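The "standard practice" this abstract refers to, examining a treatment-by-pretest interaction, can be sketched as an ordinary least squares moderation model. This illustrates only that conventional approach, not the authors' psychometric method; the data frame and column names below are hypothetical.

```python
# Sketch of the conventional HTE moderation model: OLS with a
# treatment-by-pretest interaction. Data and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "posttest": [52, 61, 47, 70, 58, 66, 49, 73],
    "pretest":  [50, 55, 45, 65, 52, 60, 44, 68],
    "treat":    [0, 1, 0, 1, 0, 1, 0, 1],
})

# The coefficient on treat:pretest summarizes how the estimated treatment
# effect varies with pretest score (one common HTE summary).
model = smf.ols("posttest ~ treat * pretest", data=df).fit()
print(model.params)
```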
Kelcey, Benjamin; Dong, Nianbo; Spybrook, Jessaca; Cox, Kyle – Journal of Educational and Behavioral Statistics, 2017
Designs that facilitate inferences concerning both the total and indirect effects of a treatment potentially offer a more holistic description of interventions because they can complement "what works" questions with the comprehensive study of the causal connections implied by substantive theories. Mapping the sensitivity of designs to…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Mediation Theory, Models
Fan, Weihua; Hancock, Gregory R. – Journal of Educational and Behavioral Statistics, 2012
This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…
Descriptors: Robustness (Statistics), Hypothesis Testing, Monte Carlo Methods, Simulation
Aloe, Ariel M.; Becker, Betsy Jane – Journal of Educational and Behavioral Statistics, 2012
A new effect size representing the predictive power of an independent variable from a multiple regression model is presented. The index, denoted as r_sp, is the semipartial correlation of the predictor with the outcome of interest. This effect size can be computed when multiple predictor variables are included in the regression model…
Descriptors: Meta Analysis, Effect Size, Multiple Regression Analysis, Models
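As a rough illustration of the index, the semipartial correlation of a focal predictor can be computed as the signed square root of the difference in R² between the full regression model and the model without that predictor. The sketch below uses simulated data and is not the authors' meta-analytic estimator.

```python
# Sketch: semipartial correlation of a focal predictor with the outcome,
# computed as sign(slope) * sqrt(R2_full - R2_reduced). Data are simulated;
# this illustrates the quantity, not the authors' meta-analytic machinery.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)             # focal predictor
x2 = 0.5 * x1 + rng.normal(size=n)  # covariate correlated with x1
y = 0.4 * x1 + 0.3 * x2 + rng.normal(size=n)

def r_squared(X, y):
    """Return R^2 and coefficients from an intercept + X regression."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var(), beta

r2_full, beta_full = r_squared(np.column_stack([x1, x2]), y)
r2_reduced, _ = r_squared(x2.reshape(-1, 1), y)

r_sp = np.sign(beta_full[1]) * np.sqrt(r2_full - r2_reduced)
print(round(r_sp, 3))
```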
Han, Bing; Dalal, Siddhartha R.; McCaffrey, Daniel F. – Journal of Educational and Behavioral Statistics, 2012
There is widespread interest in using various statistical inference tools as a part of the evaluations for individual teachers and schools. Evaluation systems typically involve classifying hundreds or even thousands of teachers or schools according to their estimated performance. Many current evaluations are largely based on individual estimates…
Descriptors: Statistical Inference, Error of Measurement, Classification, Statistical Analysis
Karl, Andrew T.; Yang, Yan; Lohr, Sharon L. – Journal of Educational and Behavioral Statistics, 2013
Value-added models have been widely used to assess the contributions of individual teachers and schools to students' academic growth based on longitudinal student achievement outcomes. There is concern, however, that ignoring the presence of missing values, which are common in longitudinal studies, can bias teachers' value-added scores.…
Descriptors: Evaluation Methods, Teacher Effectiveness, Academic Achievement, Achievement Gains
Camilli, Gregory; de la Torre, Jimmy; Chiu, Chia-Yi – Journal of Educational and Behavioral Statistics, 2010
In this article, three multilevel models for meta-analysis are examined. Hedges and Olkin suggested that effect sizes follow a noncentral "t" distribution and proposed several approximate methods. Raudenbush and Bryk further refined this model; however, this procedure is based on a normal approximation. In the current research literature, this…
Descriptors: Markov Processes, Effect Size, Meta Analysis, Monte Carlo Methods
Ho, Andrew Dean – Journal of Educational and Behavioral Statistics, 2009
Problems of scale typically arise when comparing test score trends, gaps, and gap trends across different tests. To overcome some of these difficulties, test score distributions on the same score scale can be represented by nonparametric graphs or statistics that are invariant under monotone scale transformations. This article motivates and then…
Descriptors: Nonparametric Statistics, Comparative Analysis, Trend Analysis, Scores
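One example of a statistic that is invariant under monotone scale transformations is the probability that a randomly chosen score from one group exceeds a randomly chosen score from the other (with ties split). The sketch below is illustrative only and may not match the exact statistics developed in the article; the scores are hypothetical.

```python
# Sketch of a transformation-invariant gap statistic: the probability that a
# randomly chosen score from group A exceeds one from group B (ties split).
# Because it depends only on rank order, any monotone rescaling of the score
# scale leaves it unchanged. Scores below are hypothetical.
import numpy as np

def prob_superiority(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diffs = a[:, None] - b[None, :]
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

group_a = [512, 498, 530, 545, 507, 521]
group_b = [495, 502, 488, 517, 509, 499]

print(round(prob_superiority(group_a, group_b), 3))
# Same value after a monotone transformation, e.g., the log of the scores:
print(round(prob_superiority(np.log(group_a), np.log(group_b)), 3))
```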
Allen, Jeff; Le, Huy – Journal of Educational and Behavioral Statistics, 2008
Users of logistic regression models often need to describe the overall predictive strength, or effect size, of the model's predictors. Analogs of R² have been developed, but none of these measures are interpretable on the same scale as effects of individual predictors. Furthermore, R² analogs are not invariant to the…
Descriptors: Regression (Statistics), Effect Size, Measurement, Models
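For context, one widely used R² analog for logistic regression is McFadden's pseudo-R², computed from the fitted and intercept-only log-likelihoods. The sketch below illustrates that existing analog, not the invariant effect-size measure the authors propose; the data are simulated.

```python
# Sketch of one common R^2 analog for logistic regression (McFadden's
# pseudo-R^2), computed from the fitted and intercept-only log-likelihoods.
# This shows the kind of index the abstract refers to, not the authors'
# proposed measure. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 2))
logits = 0.8 * x[:, 0] - 0.5 * x[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

full = sm.Logit(y, sm.add_constant(x)).fit(disp=0)   # fitted model
null = sm.Logit(y, np.ones((n, 1))).fit(disp=0)      # intercept-only model

mcfadden_r2 = 1 - full.llf / null.llf
print(round(mcfadden_r2, 3))
```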

Timm, Neil H. – Journal of Educational and Behavioral Statistics, 2002
Shows how to test the hypothesis that a nonnested model fits a set of predictors when modeling multiple effect sizes in meta-analysis. Illustrates the procedure using data from previous studies of the effectiveness of coaching on performance on the Scholastic Aptitude Test. (SLD)
Descriptors: Effect Size, Meta Analysis, Models, Multivariate Analysis
Viechtbauer, Wolfgang – Journal of Educational and Behavioral Statistics, 2005
The meta-analytic random effects model assumes that the variability in effect size estimates drawn from a set of studies can be decomposed into two parts: heterogeneity due to random population effects and sampling variance. In this context, the usual goal is to estimate the central tendency and the amount of heterogeneity in the population effect…
Descriptors: Bias, Meta Analysis, Models, Effect Size
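The two-part variance decomposition in the random-effects model can be illustrated with a moment-based (DerSimonian-Laird-type) estimate of the between-study variance, followed by the random-effects weighted average. This is a generic sketch with hypothetical effect sizes and sampling variances, not a reproduction of the estimators evaluated in the article.

```python
# Sketch of the random-effects decomposition: a moment-based
# (DerSimonian-Laird-type) estimate of the between-study variance tau^2,
# then the random-effects weighted average effect. Values are hypothetical.
import numpy as np

yi = np.array([0.30, 0.12, 0.45, 0.22, 0.05])    # study effect estimates
vi = np.array([0.02, 0.03, 0.015, 0.025, 0.04])  # sampling variances

w = 1 / vi                                   # fixed-effect weights
ybar_fe = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - ybar_fe) ** 2)          # heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(yi) - 1)) / c)     # between-study variance estimate

w_re = 1 / (vi + tau2)                       # random-effects weights
mu_hat = np.sum(w_re * yi) / np.sum(w_re)    # estimated average effect
print(round(tau2, 4), round(mu_hat, 3))
```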

Hedges, Larry V.; Vevea, Jack L. – Journal of Educational and Behavioral Statistics, 1996
A selection model for meta-analysis is proposed that models the selection process and corrects for the consequences of selection by publication on estimates of the mean and variance of the effect parameters. Simulation studies show that the model substantially reduces bias when the model specification is correct. (SLD)
Descriptors: Effect Size, Estimation (Mathematics), Meta Analysis, Models