Showing 1 to 15 of 136 results
Peer reviewed
Matthew Forte; Elizabeth Tipton – Society for Research on Educational Effectiveness, 2024
Background/Context: Over the past twenty-plus years, the What Works Clearinghouse (WWC) has reviewed over 1,700 studies, cataloging effect sizes for 189 interventions. Some 56% of these interventions include results from multiple, independent studies; on average, these include results from approximately 3 studies, though some include as many as 32…
Descriptors: Meta Analysis, Sampling, Effect Size, Models
Peer reviewed
Brannick, Michael T.; French, Kimberly A.; Rothstein, Hannah R.; Kiselica, Andrew M.; Apostoloski, Nenad – Research Synthesis Methods, 2021
Tolerance intervals provide a bracket intended to contain a percentage (e.g., 80%) of a population distribution given sample estimates of the mean and variance. In random-effects meta-analysis, tolerance intervals should contain researcher-specified proportions of underlying population effect sizes. Using Monte Carlo simulation, we investigated…
Descriptors: Meta Analysis, Credibility, Intervals, Effect Size
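The interval the abstract describes can be illustrated with a short sketch. Assuming the usual random-effects model, where underlying effects are normal with mean mu and variance tau^2, an 80% tolerance interval is roughly mu-hat ± z(0.90) · tau-hat. The DerSimonian-Laird estimator and the function below are illustrative assumptions, not necessarily the estimators the authors evaluate:

```python
# Minimal sketch: an 80% tolerance (credibility) interval for the
# distribution of underlying effects in a random-effects meta-analysis.
# The DerSimonian-Laird tau^2 estimator is an illustrative choice.
import numpy as np
from scipy import stats

def tolerance_interval(effects, variances, coverage=0.80):
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    mu_fe = np.sum(w * effects) / np.sum(w)       # FE mean, used for Q
    q = np.sum(w * (effects - mu_fe) ** 2)        # Cochran's Q
    k = len(effects)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # DerSimonian-Laird tau^2
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    z = stats.norm.ppf(0.5 + coverage / 2)        # 1.282 for 80% coverage
    half = z * np.sqrt(tau2)                      # ignores uncertainty in mu, tau
    return mu_re - half, mu_re + half

lo, hi = tolerance_interval([0.2, 0.5, 0.35, 0.6], [0.02, 0.03, 0.025, 0.04])
print(f"80% tolerance interval: ({lo:.3f}, {hi:.3f})")
```

Note that this simple bracket ignores estimation error in both the mean and tau; how well such intervals attain their nominal coverage is exactly what the simulation study investigates.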
Peer reviewed
Ngocvan Bui; Jessica Collier; Yunus Emre Ozturk; Donggil Song – TechTrends: Linking Research and Practice to Improve Learning, 2025
The popularity of generative AI chatbots, such as ChatGPT, has sparked numerous studies investigating their use in educational contexts. Chatbots are not a new phenomenon, however; researchers have explored conversational agents across diverse fields for decades. Conversational agents engage users in natural language…
Descriptors: Computer Mediated Communication, Artificial Intelligence, Technology Uses in Education, Technology Integration
Peer reviewed
van Laar, Saskia; Braeken, Johan – Practical Assessment, Research & Evaluation, 2021
Despite the sensitivity of fit indices to various model and data characteristics in structural equation modeling, these fit indices are used in a rigidly binary fashion, as mere rule-of-thumb thresholds in the search for model adequacy. Here, we address the behavior and interpretation of the popular Comparative Fit Index (CFI) by stressing that…
Descriptors: Goodness of Fit, Structural Equation Models, Sampling, Sample Size
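As a point of reference for the abstract above, the CFI compares the noncentrality of the hypothesized model against that of the baseline (independence) model. A minimal sketch, with made-up chi-square values:

```python
# Minimal sketch of the Comparative Fit Index (CFI).
# chi2_m/df_m come from the hypothesized model, chi2_b/df_b from the
# baseline (independence) model; the inputs below are illustrative.
def cfi(chi2_m, df_m, chi2_b, df_b):
    nc_m = max(chi2_m - df_m, 0.0)   # noncentrality of hypothesized model
    nc_b = max(chi2_b - df_b, 0.0)   # noncentrality of baseline model
    if nc_b == 0.0:
        return 1.0                   # baseline fits no worse than its df
    return 1.0 - nc_m / max(nc_b, nc_m)

print(cfi(chi2_m=85.4, df_m=60, chi2_b=912.7, df_b=78))  # ~0.970
```

The baseline model's fit appears in the denominator, which is one reason the index behaves differently across data conditions, as the abstract stresses.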
Peer reviewed
Schnell, Rainer; Thomas, Kathrin – Sociological Methods & Research, 2023
This article provides a meta-analysis of studies using the crosswise model (CM) in estimating the prevalence of sensitive characteristics in different samples and populations. Using a data set of 141 items published in 33 articles or books, we compare the difference (Δ) between estimates based on the CM and a direct question (DQ). The…
Descriptors: Meta Analysis, Models, Comparative Analysis, Publications
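A minimal sketch of the standard CM estimator may help: each respondent is asked whether their answers to the sensitive item and to a non-sensitive item with known prevalence p are the same (both yes or both no) or different, and the share of "same" responses identifies the sensitive prevalence. The function name and numbers below are illustrative:

```python
# Minimal sketch of the crosswise-model (CM) prevalence estimator that
# the abstract compares against direct questioning. p is the known
# prevalence of the innocuous item (e.g., birthday in the first quarter
# of the year); lam_hat is the observed share of "same answer" responses.
def crosswise_prevalence(lam_hat, p, n):
    pi_hat = (lam_hat + p - 1.0) / (2.0 * p - 1.0)          # point estimate
    var = lam_hat * (1.0 - lam_hat) / (n * (2.0 * p - 1.0) ** 2)
    return pi_hat, var ** 0.5

pi_hat, se = crosswise_prevalence(lam_hat=0.62, p=0.25, n=500)
print(f"estimated CM prevalence: {pi_hat:.3f} (SE {se:.3f})")
```

Because no individual answer reveals the sensitive status, the CM is expected to reduce underreporting relative to a DQ, which is the difference the meta-analysis quantifies.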
Peer reviewed
Stanley, T. D.; Doucouliagos, Hristos – Research Synthesis Methods, 2023
Partial correlation coefficients are often used as effect sizes in the meta-analysis and systematic review of multiple regression analysis research results. There are two well-known formulas for the variance and thereby for the standard error (SE) of partial correlation coefficients (PCC). One is considered the "correct" variance in the…
Descriptors: Correlation, Statistical Bias, Error Patterns, Error Correction
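For context, a PCC is typically recovered from a regression t-statistic as r = t / sqrt(t^2 + df). The sketch below computes the two variance formulas commonly contrasted in this literature; treat the exact forms and the degrees-of-freedom convention as assumptions rather than the paper's definitive notation:

```python
# Minimal sketch: a partial correlation coefficient (PCC) recovered from
# a regression t-statistic, with two textbook variance formulas of the
# kind the abstract contrasts. Which one should be used in meta-analysis
# is what the paper examines; this just computes both. Inputs are made up.
import math

def pcc_and_variances(t, df):
    r = t / math.sqrt(t ** 2 + df)        # PCC from t-stat and residual df
    var_exact = (1 - r ** 2) ** 2 / df    # often labeled the "correct" variance
    var_approx = (1 - r ** 2) / df        # the common alternative
    return r, var_exact, var_approx

r, v1, v2 = pcc_and_variances(t=2.5, df=120)
print(f"PCC={r:.3f}, SE_exact={v1**0.5:.4f}, SE_alt={v2**0.5:.4f}")
```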
Peer reviewed
Poom, Leo; af Wåhlberg, Anders – Research Synthesis Methods, 2022
In meta-analysis, effect sizes often need to be converted into a common metric. For this purpose, conversion formulas have been constructed; some are exact, while others are approximations whose accuracy has not yet been systematically tested. We performed Monte Carlo simulations where samples with pre-specified population correlations between the…
Descriptors: Meta Analysis, Effect Size, Mathematical Formulas, Monte Carlo Methods
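Two such conversion formulas, sketched below for orientation: the d-to-r formula is exact only for equal group sizes, which is one source of the approximation error this kind of simulation quantifies.

```python
# Minimal sketch of two common effect size conversions of the kind the
# abstract tests by simulation: Cohen's d to point-biserial r and back.
# The d_to_r formula assumes equal group sizes; with unequal groups a
# correction factor enters, introducing approximation error.
import math

def d_to_r(d):
    return d / math.sqrt(d ** 2 + 4)      # exact only when n1 == n2

def r_to_d(r):
    return 2 * r / math.sqrt(1 - r ** 2)

d = 0.5
r = d_to_r(d)
print(f"d={d} -> r={r:.3f} -> d={r_to_d(r):.3f}")  # round-trips to 0.5
```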
Peer reviewed
Dennis, Minyi Shih; Sorrells, Audrey M.; Chovanes, Jacquelyn; Kiru, Elisheba W. – Learning Disability Quarterly, 2022
This meta-analysis examined the ecological and population validity of intervention research for students with low mathematics achievement (SWLMA). Forty-four studies published between 2005 and 2019 that met the inclusion criteria were included in this analysis. Our findings suggest, to improve the external validity and generalizability of…
Descriptors: Mathematics Achievement, Low Achievement, Intervention, Meta Analysis
Peer reviewed
Liang, Xinya; Kamata, Akihito; Li, Ji – Educational and Psychological Measurement, 2020
One important issue in Bayesian estimation is the determination of an effective informative prior. In hierarchical Bayes models, the uncertainty of hyperparameters in a prior can be further modeled via their own priors, namely, hyper priors. This study introduces a framework to construct hyper priors for both the mean and the variance…
Descriptors: Bayesian Statistics, Randomized Controlled Trials, Effect Size, Sampling
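The hyper-prior idea can be sketched generically: rather than fixing the mean and variance of a normal prior on an effect size, draw them from their own distributions and inspect the implied prior by Monte Carlo. The distributions and numbers below are illustrative assumptions, not the authors' framework:

```python
# Minimal sketch of hyper priors: the mean and variance of a normal
# prior on an effect size are themselves given priors, and the implied
# marginal prior is examined by simulation. All choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000
mu0 = rng.normal(loc=0.3, scale=0.1, size=n_draws)    # hyper prior on prior mean
prec = rng.gamma(shape=3.0, scale=1 / 3.0, size=n_draws)
sigma0 = np.sqrt(1.0 / prec)                          # hyper prior on prior SD
effect_prior = rng.normal(mu0, sigma0)                # implied prior on effect size
print(f"implied prior: mean {effect_prior.mean():.3f}, SD {effect_prior.std():.3f}")
```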
Peer reviewed
Senior, Alistair M.; Viechtbauer, Wolfgang; Nakagawa, Shinichi – Research Synthesis Methods, 2020
Meta-analyses are often used to estimate the relative average values of a quantitative outcome in two groups (e.g., control and experimental groups). However, they may also examine the relative variability (variance) of those groups. For such comparisons, two relatively new effect size statistics, the log-transformed "variability ratio"…
Descriptors: Meta Analysis, Effect Size, Research Design, Simulation
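For orientation, the log variability ratio (lnVR) and its sampling variance are commonly given as lnVR = ln(s_E/s_C) + 1/(2(n_E−1)) − 1/(2(n_C−1)) and Var(lnVR) ≈ 1/(2(n_E−1)) + 1/(2(n_C−1)). The sketch below uses these widely cited forms, which may differ in detail from the paper's; the other statistic the abstract mentions, lnCVR, additionally divides each SD by its group mean:

```python
# Minimal sketch of the bias-corrected log variability ratio (lnVR),
# one of the two variance-comparison effect sizes the abstract refers
# to. Inputs (SDs and group sizes) are made up for illustration.
import math

def ln_vr(sd_e, n_e, sd_c, n_c):
    est = math.log(sd_e / sd_c) + 1 / (2 * (n_e - 1)) - 1 / (2 * (n_c - 1))
    var = 1 / (2 * (n_e - 1)) + 1 / (2 * (n_c - 1))   # sampling variance
    return est, var

est, var = ln_vr(sd_e=12.0, n_e=40, sd_c=9.5, n_c=38)
print(f"lnVR = {est:.3f} (SE {var**0.5:.3f})")
```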
Peer reviewed
Brunner, Martin; Keller, Lena; Stallasch, Sophie E.; Kretschmann, Julia; Hasl, Andrea; Preckel, Franzis; Lüdtke, Oliver; Hedges, Larry V. – Research Synthesis Methods, 2023
Descriptive analyses of socially important or theoretically interesting phenomena and trends are a vital component of research in the behavioral, social, economic, and health sciences. Such analyses yield reliable results when using representative individual participant data (IPD) from studies with complex survey designs, including educational…
Descriptors: Meta Analysis, Surveys, Research Design, Educational Research
Peer reviewed
Toste, Jessica R.; Logan, Jessica A. R.; Shogren, Karrie A.; Boyd, Brian A. – Exceptional Children, 2023
Group design research studies can provide evidence to draw conclusions about "what works," "for whom," and "under what conditions" in special education. The quality indicators introduced by Gersten and colleagues (2005) have contributed to increased rigor in group design research, which has provided substantial…
Descriptors: Research Design, Educational Research, Special Education, Educational Indicators
Peer reviewed
Li, Weidong; Xie, Xiuye; Li, Yilin; Chen, Ruth; Shen, Hejun – Research Quarterly for Exercise and Sport, 2021
Background/Purpose: The purpose of the present study was to review intervention studies in school physical education, with a goal of identifying the gaps and future trends of intervention research in the field of physical education. Methods: A total of 71 quantitative experimental studies were identified by manually examining all the articles…
Descriptors: Physical Education, Elementary Secondary Education, Intervention, Educational Research
Hedges, Larry V.; Schauer, Jacob M. – Journal of Educational and Behavioral Statistics, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
Hedges, Larry V.; Schauer, Jacob M. – Grantee Submission, 2019
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the…
Descriptors: Replication (Evaluation), Research Design, Research Methodology, Program Evaluation
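One formal approach in the spirit of the article above treats the original and replication estimates as a two-study meta-analysis and tests their homogeneity with Cochran's Q; the sketch below is a generic version of that idea, not necessarily the article's exact procedure:

```python
# Minimal sketch: test whether an original and a replication effect
# estimate are consistent with a single underlying effect, using
# Cochran's Q. A generic heterogeneity test, shown for illustration.
import numpy as np
from scipy import stats

def replication_q_test(effects, variances):
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)      # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)           # Cochran's Q
    p = stats.chi2.sf(q, df=len(effects) - 1)         # heterogeneity p-value
    return q, p

q, p = replication_q_test([0.45, 0.12], [0.02, 0.015])  # original, replication
print(f"Q = {q:.2f}, p = {p:.3f}")  # small p indicates inconsistent estimates
```

As the abstract notes, such a test's conclusions depend heavily on the precision of both studies, which is why assessing replication is less straightforward than simply repeating a study.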