Showing all 15 results
Peer reviewed
Brannick, Michael T.; French, Kimberly A.; Rothstein, Hannah R.; Kiselica, Andrew M.; Apostoloski, Nenad – Research Synthesis Methods, 2021
Tolerance intervals provide a bracket intended to contain a percentage (e.g., 80%) of a population distribution given sample estimates of the mean and variance. In random-effects meta-analysis, tolerance intervals should contain researcher-specified proportions of underlying population effect sizes. Using Monte Carlo simulation, we investigated…
Descriptors: Meta Analysis, Credibility, Intervals, Effect Size
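The bracket the abstract describes can be sketched under a normality assumption: if the underlying population effects are Normal(mu, tau²), a central 80% interval is mu ± z₀.₉·tau. The function and values below are illustrative only, not from the article (which studies tolerance intervals that also account for estimation error in the mean and variance):

```python
import math
import statistics

def credibility_interval(mu, tau2, coverage=0.80):
    """Interval intended to contain `coverage` of the population
    effect-size distribution in a random-effects meta-analysis,
    assuming effects are Normal(mu, tau2)."""
    # z for a central 80% interval: 10% in each tail
    z = statistics.NormalDist().inv_cdf(0.5 + coverage / 2)
    half = z * math.sqrt(tau2)
    return mu - half, mu + half
```

With a hypothetical pooled mean of 0.50 and tau² = 0.04, the 80% interval is roughly (0.24, 0.76).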
Peer reviewed
Walters, Glenn D. – International Journal of Social Research Methodology, 2019
Identifying mediators in variable chains as part of a causal mediation analysis can shed light on issues of causation, assessment, and intervention. However, coefficients and effect sizes in a causal mediation analysis are nearly always small. This can lead those less familiar with the approach to reject the results of causal mediation analysis.…
Descriptors: Effect Size, Statistical Analysis, Sampling, Statistical Inference
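Why mediation effects are "nearly always small" follows from the arithmetic: the indirect effect is the product of two path coefficients, each typically well below 1. A minimal sketch with hypothetical standardized paths and standard errors (the first-order Sobel formula shown is one common way to get the product's SE, not necessarily the article's method):

```python
import math

# Hypothetical standardized paths and standard errors (not from the article).
a, se_a = 0.30, 0.10   # X -> M
b, se_b = 0.30, 0.10   # M -> Y, controlling for X

indirect = a * b  # two "medium" paths yield an indirect effect of only 0.09
sobel_se = math.sqrt(a**2 * se_b**2 + b**2 * se_a**2)  # first-order Sobel SE
z = indirect / sobel_se
```

Judged against conventional benchmarks for a single coefficient, 0.09 looks trivial even though both constituent paths are respectable, which is the misreading the article addresses.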
Peer reviewed
Banjanovic, Erin S.; Osborne, Jason W. – Practical Assessment, Research & Evaluation, 2016
Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis significance testing. Although confidence intervals have been recommended by scholars for many years,…
Descriptors: Computation, Statistical Analysis, Effect Size, Sampling
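One common recipe for a CIES is the large-sample normal approximation to the sampling variance of Cohen's d. This is a generic sketch, not necessarily among the methods the article covers; exact methods invert the noncentral t distribution instead:

```python
import math
import statistics

def cohens_d_ci(d, n1, n2, conf=0.95):
    """Approximate CI for Cohen's d using the common large-sample
    variance formula var(d) = (n1+n2)/(n1*n2) + d^2/(2*(n1+n2))."""
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    z = statistics.NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * math.sqrt(var_d)
    return d - half, d + half
```

For a hypothetical d = 0.50 with 50 per group, this gives roughly (0.10, 0.90) — a wide interval that conveys the imprecision a bare point estimate hides.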
Peer reviewed
Beasley, T. Mark – Journal of Experimental Education, 2014
Increasing the correlation between the independent variable and the mediator ("a" coefficient) increases the effect size ("ab") for mediation analysis; however, increasing "a" by definition increases collinearity in mediation models. As a result, the standard errors of product tests increase. The variance inflation caused by…
Descriptors: Statistical Analysis, Effect Size, Nonparametric Statistics, Statistical Inference
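The variance inflation in question follows the standard two-predictor formula: when the mediator correlates with the treatment at r = a, the sampling variance of the "b" coefficient is multiplied by 1/(1 − a²). A one-line illustration:

```python
def vif(r):
    """Variance inflation factor in a two-predictor regression where
    the predictors correlate at r: 1 / (1 - r^2)."""
    return 1.0 / (1.0 - r**2)
```

vif(0.3) ≈ 1.10 but vif(0.7) ≈ 1.96, so strengthening the "a" path to boost "ab" simultaneously inflates the standard error of "b" — the trade-off the abstract describes.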
Peer reviewed
Stapleton, Laura M.; Pituch, Keenan A.; Dion, Eric – Journal of Experimental Education, 2015
This article presents 3 standardized effect size measures to use when sharing results of an analysis of mediation of treatment effects for cluster-randomized trials. The authors discuss 3 examples of mediation analysis (upper-level mediation, cross-level mediation, and cross-level mediation with a contextual effect) with demonstration of the…
Descriptors: Effect Size, Measurement Techniques, Statistical Analysis, Research Design
Peer reviewed
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim – Journal of Experimental Education, 2014
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Descriptors: Effect Size, Statistical Bias, Sample Size, Regression (Statistics)
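For context on the bias being corrected: the usual small-sample adjustment for a standardized mean difference is Hedges' multiplicative factor J = 1 − 3/(4·df − 1). Whether this is among the four approaches the truncated abstract refers to is not stated, so treat the sketch below as generic background:

```python
def hedges_g(d, n1, n2):
    """Apply Hedges' small-sample correction J = 1 - 3/(4*df - 1)
    to a standardized mean difference d, with df = n1 + n2 - 2."""
    df = n1 + n2 - 2
    return (1 - 3 / (4 * df - 1)) * d
```

With n1 = n2 = 10, a d of 0.50 shrinks to about 0.479; the correction vanishes as samples grow, which is why the bias matters mainly for the short series typical of single-subject designs.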
Peer reviewed
Finch, W. Holmes; French, Brian F. – Educational and Psychological Measurement, 2012
Effect size use has been increasing in the past decade in many research areas. Researchers are encouraged to report confidence intervals alongside effect sizes. Prior work has investigated the performance of confidence interval estimation with Cohen's d. This study extends this line of work to the analysis of variance case with more than two…
Descriptors: Computation, Statistical Analysis, Effect Size, Comparative Analysis
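For the multi-group ANOVA case, the point estimate of eta-squared can be recovered from a reported F statistic; the interval methods the article evaluates (typically built on the noncentral F distribution) are not reproduced here. A hypothetical example:

```python
def eta_squared_from_f(F, df_between, df_within):
    """Eta-squared recovered from a one-way ANOVA F statistic:
    (df_b * F) / (df_b * F + df_w)."""
    return (df_between * F) / (df_between * F + df_within)
```

For F(2, 57) = 4.0 this gives about 0.123, i.e. roughly 12% of variance associated with group membership.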
Peer reviewed
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J. – New Directions for Evaluation, 2013
The goal of this chapter is to recommend quality criteria to guide evaluators' selections of sampling designs when mixing approaches. First, we contextualize our discussion of quality criteria and sampling designs by discussing the concept of interpretive consistency and how it impacts sampling decisions. Embedded in this discussion are…
Descriptors: Sampling, Mixed Methods Research, Evaluators, Q Methodology
Peer reviewed
Ruscio, John; Gera, Benjamin Lee – Multivariate Behavioral Research, 2013
Researchers are strongly encouraged to accompany the results of statistical tests with appropriate estimates of effect size. For 2-group comparisons, a probability-based effect size estimator ("A") has many appealing properties (e.g., it is easy to understand, robust to violations of parametric assumptions, insensitive to outliers). We review…
Descriptors: Psychological Studies, Gender Differences, Researchers, Test Results
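The "A" estimator the abstract describes is the probability of superiority, P(X > Y) + .5·P(X = Y), which equals the area under an ROC curve. A brute-force pairwise sketch:

```python
def a_statistic(x, y):
    """Probability-of-superiority estimate: compare every (x, y) pair,
    counting wins as 1 and ties as 0.5; equals the area under the
    ROC curve for separating the two groups."""
    wins = sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)
    return wins / (len(x) * len(y))
```

Two identical samples give A = 0.5 (no separation) and completely separated samples give 1.0, which is why A is easy to interpret and, being rank-based, robust to outliers and non-normality.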
Peer reviewed
Ruscio, John; Mullen, Tara – Multivariate Behavioral Research, 2012
It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…
Descriptors: Computation, Statistical Analysis, Probability, Effect Size
Peer reviewed
Calzada, Maria E.; Gardner, Holly – Mathematics and Computer Education, 2011
The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data are symmetric the Student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data are skewed and for sample sizes n greater than or equal to 10,…
Descriptors: Intervals, Effect Size, Simulation, Undergraduate Students
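The nonparametric bootstrap interval being compared with the Student's t interval can be sketched as a percentile bootstrap (one of several bootstrap variants the study may have examined):

```python
import random
import statistics

def percentile_bootstrap_ci(data, stat=statistics.mean,
                            n_boot=5000, conf=0.95, seed=1):
    """Nonparametric percentile bootstrap CI: resample the data with
    replacement, recompute the statistic each time, and take the
    middle conf proportion of the sorted replicates."""
    rng = random.Random(seed)
    reps = sorted(stat(rng.choices(data, k=len(data)))
                  for _ in range(n_boot))
    lo = reps[int(n_boot * (1 - conf) / 2)]
    hi = reps[int(n_boot * (1 + conf) / 2) - 1]
    return lo, hi
```

Unlike the t interval, this makes no normality assumption, which is why the comparison becomes interesting for skewed data and small n.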
Peer reviewed
Fritz, Matthew S.; Taylor, Aaron B.; MacKinnon, David P. – Multivariate Behavioral Research, 2012
Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special…
Descriptors: Statistical Analysis, Error of Measurement, Statistical Bias, Sampling
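The bias-corrected bootstrap whose Type I error rates are at issue shifts the percentile endpoints by z0, the normal quantile of the fraction of bootstrap replicates falling below the original estimate. A minimal sketch, without the acceleration term that distinguishes the accelerated (BCa) variant:

```python
import statistics

def bc_bootstrap_ci(reps, theta_hat, conf=0.95):
    """Bias-corrected (BC) percentile bootstrap CI. `reps` are
    bootstrap replicates of the statistic, `theta_hat` the original
    estimate; assumes some replicates fall on each side of theta_hat."""
    nd = statistics.NormalDist()
    reps = sorted(reps)
    prop = sum(r < theta_hat for r in reps) / len(reps)
    z0 = nd.inv_cdf(prop)                    # bias-correction constant
    z = nd.inv_cdf(0.5 + conf / 2)
    lo_p = nd.cdf(2 * z0 - z)                # shifted lower percentile
    hi_p = nd.cdf(2 * z0 + z)                # shifted upper percentile
    lo = reps[max(0, int(lo_p * len(reps)) - 1)]
    hi = reps[min(len(reps) - 1, int(hi_p * len(reps)))]
    return lo, hi
```

When exactly half the replicates fall below the estimate, z0 = 0 and the interval reduces to the plain percentile bootstrap; the correction, and the anomalies the article reports, arise when the bootstrap distribution is shifted relative to the estimate.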
Peer reviewed
Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff – Career and Technical Education Research, 2012
Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…
Descriptors: Vocational Education, Effect Size, Intervals, Self Esteem
Peer reviewed
Olsen, Robert B.; Unlu, Fatih; Price, Cristofer; Jaciw, Andrew P. – National Center for Education Evaluation and Regional Assistance, 2011
This report examines the differences in impact estimates and standard errors that arise when these are derived using state achievement tests only (as pre-tests and post-tests), study-administered tests only, or some combination of state- and study-administered tests. State tests may yield different evaluation results relative to a test that is…
Descriptors: Achievement Tests, Standardized Tests, State Standards, Reading Achievement
Peer reviewed
Kirk, Roger E. – Educational and Psychological Measurement, 2001
Makes the case that science is best served when researchers focus on the size of effects and their practical significance. Advocates the use of confidence intervals for deciding whether chance or sampling variability is an unlikely explanation for an observed effect. Calls for more emphasis on effect sizes in the next edition of the American…
Descriptors: Effect Size, Hypothesis Testing, Psychology, Research Reports