Showing 1 to 15 of 44 results
Peer reviewed
Matthew Forte; Elizabeth Tipton – Society for Research on Educational Effectiveness, 2024
Background/Context: Over the past twenty-plus years, the What Works Clearinghouse (WWC) has reviewed over 1,700 studies, cataloging effect sizes for 189 interventions. Some 56% of these interventions include results from multiple, independent studies; on average, these include results of approximately 3 studies, though some include as many as 32…
Descriptors: Meta Analysis, Sampling, Effect Size, Models
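The Forte and Tipton abstract above concerns interventions whose WWC records combine effect sizes from several independent studies. Purely as a point of reference, here is a minimal sketch of standard DerSimonian-Laird random-effects pooling of a few study-level effect sizes; the numbers are invented for illustration and this is not the authors' own method.

```python
import numpy as np

def dersimonian_laird(yi, vi):
    """Pool study effect sizes yi with sampling variances vi under a random-effects model."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    wi = 1.0 / vi                                  # fixed-effect weights
    mu_fe = np.sum(wi * yi) / np.sum(wi)
    Q = np.sum(wi * (yi - mu_fe) ** 2)             # heterogeneity statistic
    C = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
    tau2 = max(0.0, (Q - (len(yi) - 1)) / C)       # method-of-moments tau^2
    wi_re = 1.0 / (vi + tau2)                      # random-effects weights
    mu_re = np.sum(wi_re * yi) / np.sum(wi_re)
    se_re = np.sqrt(1.0 / np.sum(wi_re))
    return mu_re, se_re, tau2

# Hypothetical intervention with three independent studies
print(dersimonian_laird([0.21, 0.35, 0.10], [0.02, 0.05, 0.03]))
```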
Peer reviewed
Brannick, Michael T.; French, Kimberly A.; Rothstein, Hannah R.; Kiselica, Andrew M.; Apostoloski, Nenad – Research Synthesis Methods, 2021
Tolerance intervals provide a bracket intended to contain a percentage (e.g., 80%) of a population distribution given sample estimates of the mean and variance. In random-effects meta-analysis, tolerance intervals should contain researcher-specified proportions of underlying population effect sizes. Using Monte Carlo simulation, we investigated…
Descriptors: Meta Analysis, Credibility, Intervals, Effect Size
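For orientation on the Brannick et al. entry: a tolerance interval here is meant to bracket a stated share of the underlying population effect sizes. The sketch below is a deliberately naive normal-theory version that treats the mean and between-study variance as known; ignoring their estimation uncertainty is exactly the kind of simplification the paper's simulations interrogate. The function name and inputs are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def simple_tolerance_interval(mu_hat, tau2_hat, coverage=0.80):
    """Naive interval intended to contain `coverage` of the population
    effect-size distribution, treating mu_hat and tau2_hat as known."""
    z = stats.norm.ppf(0.5 + coverage / 2.0)
    half = z * np.sqrt(tau2_hat)
    return mu_hat - half, mu_hat + half

print(simple_tolerance_interval(mu_hat=0.30, tau2_hat=0.04))  # roughly (0.04, 0.56)
```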
Peer reviewed
Schnell, Rainer; Thomas, Kathrin – Sociological Methods & Research, 2023
This article provides a meta-analysis of studies using the crosswise model (CM) in estimating the prevalence of sensitive characteristics in different samples and populations. On a data set of 141 items published in 33 articles or books, we compare the difference (Δ) between estimates based on the CM and a direct question (DQ). The…
Descriptors: Meta Analysis, Models, Comparative Analysis, Publications
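As background for the Schnell and Thomas entry: in the crosswise model, respondents report only whether their answers to a sensitive item and an innocuous item with known prevalence match, and the sensitive prevalence is recovered from that proportion. Below is a minimal sketch of the standard CM point estimate and its contrast with a direct-question figure; the numbers are hypothetical, not data from the paper.

```python
def crosswise_estimate(prop_same, p_nonsensitive):
    """Crosswise-model prevalence estimate.

    prop_same: observed proportion reporting that their two answers match
    p_nonsensitive: known prevalence of the innocuous item (must differ from 0.5)
    """
    if abs(p_nonsensitive - 0.5) < 1e-9:
        raise ValueError("p must differ from 0.5 for the CM to be identified")
    return (prop_same + p_nonsensitive - 1.0) / (2.0 * p_nonsensitive - 1.0)

# Hypothetical item: 62% report 'same'; innocuous item has known prevalence 0.25
pi_cm = crosswise_estimate(0.62, 0.25)
pi_dq = 0.18                      # hypothetical direct-question estimate
print(pi_cm, pi_cm - pi_dq)       # the difference is the kind of Delta the paper compares
```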
Peer reviewed
Stanley, T. D.; Doucouliagos, Hristos – Research Synthesis Methods, 2023
Partial correlation coefficients are often used as effect sizes in the meta-analysis and systematic review of multiple regression analysis research results. There are two well-known formulas for the variance and thereby for the standard error (SE) of partial correlation coefficients (PCC). One is considered the "correct" variance in the…
Descriptors: Correlation, Statistical Bias, Error Patterns, Error Correction
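For the Stanley and Doucouliagos entry: partial correlations used as effect sizes are typically recovered from reported regression t-statistics, and the contested issue is which variance formula to attach to them. The sketch below shows the usual t-to-PCC conversion together with two variance forms commonly quoted in this literature; treat the pairing and labels as assumptions for illustration, not the authors' exact notation or conclusion.

```python
import numpy as np

def pcc_from_t(t, df):
    """Partial correlation recovered from a regression t-statistic and its
    residual degrees of freedom (a standard conversion in this literature)."""
    return t / np.sqrt(t ** 2 + df)

def pcc_variances(r, df):
    """Two variance formulas often quoted for a partial correlation;
    which one is appropriate is the question the abstract raises."""
    v_squared = (1 - r ** 2) ** 2 / df   # (1 - r^2)^2 / df
    v_linear = (1 - r ** 2) / df         # (1 - r^2) / df
    return v_squared, v_linear

r = pcc_from_t(t=2.5, df=120)
print(r, pcc_variances(r, 120))
```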
Peer reviewed
Poom, Leo; af Wåhlberg, Anders – Research Synthesis Methods, 2022
In meta-analysis, effect sizes often need to be converted into a common metric. For this purpose, conversion formulas have been constructed; some are exact, others are approximations whose accuracy has not yet been systematically tested. We performed Monte Carlo simulations where samples with pre-specified population correlations between the…
Descriptors: Meta Analysis, Effect Size, Mathematical Formulas, Monte Carlo Methods
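To make the Poom and af Wåhlberg entry concrete: a familiar example of such conversions is moving between a standardized mean difference d and a (point-biserial) correlation r. The sketch below uses the textbook formulas, including the equal-group-size approximation whose accuracy is the sort of thing the paper tests by simulation; it is a generic illustration, not the authors' code.

```python
import numpy as np

def d_to_r(d, n1=None, n2=None):
    """Convert standardized mean difference d to point-biserial r.
    With group sizes, a = (n1 + n2)^2 / (n1 * n2); a = 4 assumes equal groups."""
    a = 4.0 if n1 is None or n2 is None else (n1 + n2) ** 2 / (n1 * n2)
    return d / np.sqrt(d ** 2 + a)

def r_to_d(r):
    """Convert (point-biserial) r back to d under the equal-group assumption."""
    return 2.0 * r / np.sqrt(1.0 - r ** 2)

d = 0.5
print(d_to_r(d), r_to_d(d_to_r(d)))   # round trip under equal groups
print(d_to_r(d, n1=30, n2=90))        # unequal groups shift the converted value
```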
Peer reviewed
Dennis, Minyi Shih; Sorrells, Audrey M.; Chovanes, Jacquelyn; Kiru, Elisheba W. – Learning Disability Quarterly, 2022
This meta-analysis examined the ecological and population validity of intervention research for students with low mathematics achievement (SWLMA). Forty-four studies published between 2005 and 2019 that met the inclusionary criterion were included in this analysis. Our findings suggest, to improve the external validity and generalizability of…
Descriptors: Mathematics Achievement, Low Achievement, Intervention, Meta Analysis
Peer reviewed
Senior, Alistair M.; Viechtbauer, Wolfgang; Nakagawa, Shinichi – Research Synthesis Methods, 2020
Meta-analyses are often used to estimate the relative average values of a quantitative outcome in two groups (e.g., control and experimental groups). However, they may also examine the relative variability (variance) of those groups. For such comparisons, two relatively new effect size statistics, the log-transformed "variability ratio"…
Descriptors: Meta Analysis, Effect Size, Research Design, Simulation
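For the Senior, Viechtbauer, and Nakagawa entry: the log variability ratio compares group standard deviations rather than means. The sketch below follows one published formulation of the point estimate (with its small-sample correction) and its approximate sampling variance; the group SDs and sample sizes are invented for illustration.

```python
import numpy as np

def ln_variability_ratio(sd_e, n_e, sd_c, n_c):
    """Log variability ratio comparing group SDs, with the usual small-sample
    bias correction, plus its approximate sampling variance."""
    lnvr = np.log(sd_e / sd_c) + 1.0 / (2 * (n_e - 1)) - 1.0 / (2 * (n_c - 1))
    var = 1.0 / (2 * (n_e - 1)) + 1.0 / (2 * (n_c - 1))
    return lnvr, var

print(ln_variability_ratio(sd_e=12.0, n_e=40, sd_c=9.0, n_c=45))
```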
Peer reviewed
Brunner, Martin; Keller, Lena; Stallasch, Sophie E.; Kretschmann, Julia; Hasl, Andrea; Preckel, Franzis; Lüdtke, Oliver; Hedges, Larry V. – Research Synthesis Methods, 2023
Descriptive analyses of socially important or theoretically interesting phenomena and trends are a vital component of research in the behavioral, social, economic, and health sciences. Such analyses yield reliable results when using representative individual participant data (IPD) from studies with complex survey designs, including educational…
Descriptors: Meta Analysis, Surveys, Research Design, Educational Research
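The Brunner et al. entry concerns descriptive analyses of individual participant data from complex survey designs. One generic concept from that setting, shown below only as orientation and not as the authors' procedure, is the Kish design effect, which converts a clustered sample into an effective sample size; the cluster size and intraclass correlation here are hypothetical.

```python
def kish_design_effect(cluster_size, icc):
    """Classic design-effect approximation for cluster sampling:
    deff = 1 + (m - 1) * rho."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n, deff):
    """Nominal sample size deflated by the design effect."""
    return n / deff

deff = kish_design_effect(cluster_size=25, icc=0.15)
print(deff, effective_sample_size(5000, deff))  # deff 4.6, roughly 1,087 effective cases
```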
Hedges, Larry V.; Schauer, Jacob M. – Journal of Educational and Behavioral Statistics, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
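As a pointer to the kind of analysis the Hedges and Schauer abstract has in mind: agreement between an original study and its replications can be tested with a heterogeneity (Q) statistic rather than by eyeballing whether results "agree." The sketch below is a minimal generic version of such a test, with invented estimates; it is not presented as the article's exact procedure.

```python
import numpy as np
from scipy import stats

def q_test_of_agreement(estimates, variances):
    """Heterogeneity (Q) test applied to an original study plus replications:
    a large Q relative to chi-square(k - 1) signals disagreement beyond
    what sampling error alone would produce."""
    y, v = np.asarray(estimates, float), np.asarray(variances, float)
    w = 1.0 / v
    y_bar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_bar) ** 2)
    p = stats.chi2.sf(Q, df=len(y) - 1)
    return Q, p

# Hypothetical original (0.40) and two replications
print(q_test_of_agreement([0.40, 0.12, 0.05], [0.02, 0.015, 0.02]))
```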
Hedges, Larry V.; Schauer, Jacob M. – Grantee Submission, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
Peer reviewed
Pustejovsky, James Eric – AERA Online Paper Repository, 2017
Methods for meta-analyzing single-case designs (SCDs) are needed in order to inform evidence-based practice in special education and to draw broader and more defensible generalizations in areas where SCDs comprise a large part of the research base. The most widely used outcomes in single-case research are measures of behavior collected using…
Descriptors: Effect Size, Research Design, Meta Analysis, Observation
Peer reviewed
Lai, Mark H. C.; Kwok, Oi-Man – Journal of Educational and Behavioral Statistics, 2014
Multilevel modeling techniques are becoming more popular in handling data with multilevel structure in educational and behavioral research. Recently, researchers have paid more attention to cross-classified data structure that naturally arises in educational settings. However, unlike traditional single-level research, methodological studies about…
Descriptors: Hierarchical Linear Modeling, Differences, Effect Size, Computation
Peer reviewed
Tipton, Elizabeth; Pustejovsky, James E. – Society for Research on Educational Effectiveness, 2015
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Descriptors: Randomized Controlled Trials, Sample Size, Effect Size, Hypothesis Testing
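For the Tipton and Pustejovsky entry: the omnibus test they study asks whether a block of meta-regression coefficients is jointly zero. The sketch below shows only the naive large-sample Wald/F version of such a test with invented coefficients and covariance matrix; picking appropriate denominator degrees of freedom in small samples is exactly what their corrections address, and nothing here reproduces those corrections.

```python
import numpy as np
from scipy import stats

def omnibus_wald_f(beta, vcov, denom_df):
    """Naive omnibus test that a block of meta-regression coefficients is zero.
    The large-sample version refers F to F(q, denom_df); choosing denom_df well
    in small samples is the hard part the paper's corrections target."""
    beta = np.asarray(beta, float)
    q = beta.size
    F = float(beta @ np.linalg.solve(vcov, beta)) / q
    p = stats.f.sf(F, q, denom_df)
    return F, p

# Hypothetical contrast of three moderator levels (two dummy coefficients)
beta = np.array([0.18, 0.05])
vcov = np.array([[0.010, 0.002],
                 [0.002, 0.012]])
print(omnibus_wald_f(beta, vcov, denom_df=20))
```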
Peer reviewed
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim – Journal of Experimental Education, 2014
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Descriptors: Effect Size, Statistical Bias, Sample Size, Regression (Statistics)
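The Ugille et al. entry concerns bias in standardized effect sizes when the number of measurement occasions is small. A familiar example of this kind of small-sample bias adjustment, shown below purely for orientation and not as the authors' own estimator, is the multiplicative correction applied to a standardized mean difference (Hedges' g).

```python
def hedges_correction(d, df):
    """Small-sample bias correction for a standardized mean difference
    (Hedges' g): g = J * d with J = 1 - 3 / (4 * df - 1)."""
    J = 1.0 - 3.0 / (4.0 * df - 1.0)
    return J * d

print(hedges_correction(d=0.80, df=10))  # about 0.74 with only ~10 degrees of freedom
```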
Peer reviewed
Haber, Mason G.; Mazzotti, Valerie L.; Mustian, April L.; Rowe, Dawn A.; Bartholomew, Audrey L.; Test, David W.; Fowler, Catherine H. – Review of Educational Research, 2016
Students with disabilities experience poorer post-school outcomes compared with their peers without disabilities. Existing experimental literature on "what works" for improving these outcomes is rare; however, a rapidly growing body of research investigates correlational relationships between experiences in school and post-school…
Descriptors: Meta Analysis, Predictor Variables, Success, Postsecondary Education