Showing all 14 results
Peer reviewed
Sanghyun Hong; W. Robert Reed – Research Synthesis Methods, 2024
This study builds on the simulation framework of a recent paper by Stanley and Doucouliagos (Research Synthesis Methods, 2023;14:515-519). S&D use simulations to make the argument that meta-analyses using partial correlation coefficients (PCCs) should employ a "suboptimal" estimator of the PCC standard error when…
Descriptors: Meta Analysis, Correlation, Weighted Scores, Simulation
Peer reviewed
Huang, Hening – Research Synthesis Methods, 2023
Many statistical methods (estimators) are available for estimating the consensus value (or average effect) and heterogeneity variance in interlaboratory studies or meta-analyses. These estimators are all valid because they are developed from or supported by certain statistical principles. However, no estimator can be perfect; each must have error or…
Descriptors: Statistical Analysis, Computation, Measurement Techniques, Meta Analysis
Peer reviewed
Hong, Sanghyun; Reed, W. Robert – Research Synthesis Methods, 2021
The purpose of this study is to show how Monte Carlo analysis of meta-analytic estimators can be used to select estimators for specific research situations. Our analysis conducts 1620 individual experiments, where each experiment is defined by a unique combination of sample size, effect size, effect size heterogeneity, publication selection…
Descriptors: Monte Carlo Methods, Meta Analysis, Research Methodology, Experiments
Peer reviewed
Rubio-Aparicio, María; López-López, José Antonio; Sánchez-Meca, Julio; Marín-Martínez, Fulgencio; Viechtbauer, Wolfgang; Van den Noortgate, Wim – Research Synthesis Methods, 2018
The random-effects model, applied in most meta-analyses nowadays, typically assumes normality of the distribution of the effect parameters. The purpose of this study was to examine the performance of various random-effects methods (standard method, Hartung's method, profile likelihood method, and bootstrapping) for computing an average effect size…
Descriptors: Effect Size, Meta Analysis, Intervals, Monte Carlo Methods
Peer reviewed
López-López, José Antonio; Van den Noortgate, Wim; Tanner-Smith, Emily E.; Wilson, Sandra Jo; Lipsey, Mark W. – Research Synthesis Methods, 2017
Dependent effect sizes are ubiquitous in meta-analysis. Using Monte Carlo simulation, we compared the performance of 2 methods for meta-regression with dependent effect sizes--robust variance estimation (RVE) and 3-level modeling--with the standard meta-analytic method for independent effect sizes. We further compared bias-reduced linearization…
Descriptors: Effect Size, Regression (Statistics), Meta Analysis, Comparative Analysis
Peer reviewed
Joo, Seang-hwane; Wang, Yan; Ferron, John M. – AERA Online Paper Repository, 2017
Multiple-baseline studies provide meta-analysts the opportunity to compute effect sizes based on either within-series comparisons of treatment phase to baseline phase observations, or time specific between-series comparisons of observations from those that have started treatment to observations of those that are still in baseline. The advantage of…
Descriptors: Meta Analysis, Effect Size, Hierarchical Linear Modeling, Computation
Peer reviewed
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim – Journal of Experimental Education, 2016
The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…
Descriptors: Hierarchical Linear Modeling, Monte Carlo Methods, Computation, Statistical Bias
Peer reviewed
Nugent, William Robert; Moore, Matthew; Story, Erin – Educational and Psychological Measurement, 2015
The standardized mean difference (SMD) is perhaps the most important meta-analytic effect size. It is typically used to represent the difference between treatment and control population means in treatment efficacy research. It is also used to represent differences between populations with different characteristics, such as persons who are…
Descriptors: Error of Measurement, Error Correction, Predictor Variables, Monte Carlo Methods
Peer reviewed
Stanley, T. D.; Doucouliagos, Hristos – Research Synthesis Methods, 2014
Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with…
Descriptors: Regression (Statistics), Bias, Algebra, Mathematical Formulas
Peer reviewed
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio – Educational and Psychological Measurement, 2010
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
Descriptors: Meta Analysis, Sample Size, Effect Size, Monte Carlo Methods
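The inverse-variance weighting this abstract refers to can be sketched in a few lines. This is a minimal illustration of the general fixed-effect pooling formula, not code from the paper, and the function name is hypothetical:

```python
# Minimal sketch of inverse-variance weighting for pooling independent
# effect sizes (illustrative only; assumes known sampling variances,
# whereas in practice these weights are estimated, as the abstract notes).
def inverse_variance_average(effects, variances):
    """Weighted average of effect sizes with weights = 1 / variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    variance_of_mean = 1.0 / total  # variance of the pooled estimate
    return mean, variance_of_mean

# Example: three studies with effect sizes and their sampling variances
mean, var = inverse_variance_average([0.2, 0.5, 0.3], [0.04, 0.01, 0.02])
```

More precise studies (smaller variances) receive larger weights, which is why sampling error in the estimated weights matters for the averaging procedures the paper evaluates.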
Peer reviewed
Jenson, William R.; Clark, Elaine; Kircher, John C.; Kristjansson, Sean D. – Psychology in the Schools, 2007
Evidence-based practice approaches to interventions have come of age and promise to provide a new standard of excellence for school psychologists. This article describes several definitions of evidence-based practice and the problems associated with traditional statistical analyses that rely on rejection of the null hypothesis for the…
Descriptors: School Psychologists, Statistical Analysis, Hypothesis Testing, Intervention
Peer reviewed
Bosch, Holger; Steinkamp, Fiona; Boller, Emil – Psychological Bulletin, 2006
Seance-room and other large-scale psychokinetic phenomena have fascinated humankind for decades. Experimental research has reduced these phenomena to attempts to influence (a) the fall of dice and, later, (b) the output of random number generators (RNGs). The meta-analysis combined 380 studies that assessed whether RNG output correlated with human…
Descriptors: Nonverbal Communication, Intention, Interaction, Meta Analysis
Lambert, Richard G.; Curlette, William L. – 1995
Validity generalization meta-analysis (VG) examines the extent to which the validity of an instrument can be transported across settings. VG offers correction and summarization procedures designed in part to remove the effects of statistical artifacts on estimates of association between criterion and predictor. By employing a random effects model,…
Descriptors: Correlation, Error of Measurement, Estimation (Mathematics), Meta Analysis
Peer reviewed
Cornwell, John M.; Ladd, Robert T. – Educational and Psychological Measurement, 1993
Simulated data typical of those from meta-analyses are used to evaluate the reliability, Type I and Type II errors, bias, and standard error of the meta-analytic procedures of Schmidt and Hunter (1977). Concerns about power, reliability, and Type I errors are presented. (SLD)
Descriptors: Bias, Computer Simulation, Correlation, Effect Size