Showing 1 to 15 of 57 results
Peer reviewed
Timo Gnambs; Ulrich Schroeders – Research Synthesis Methods, 2024
Meta-analyses of treatment effects in randomized controlled trials often face the problem of missing information required to calculate effect sizes and their sampling variances. In particular, correlations between pre- and posttest scores are frequently not available. As an ad-hoc solution, researchers impute a constant value for the missing…
Descriptors: Accuracy, Meta Analysis, Randomized Controlled Trials, Effect Size
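The role of the imputed correlation can be made concrete with the standard large-sample variance of a standardized mean change (Becker, 1988), in which the pre-post correlation r enters directly; a minimal sketch with hypothetical values (the formula choice and numbers are illustrative, not the authors' procedure):

```python
import numpy as np

def var_smc(d, n, r):
    """Approximate sampling variance of a standardized mean change
    (Becker, 1988); the pre-post correlation r enters directly."""
    return 2 * (1 - r) / n + d**2 / (2 * n)

d, n = 0.40, 50  # hypothetical effect size and group size
for r in (0.3, 0.5, 0.7):  # candidate imputed correlations
    print(f"r = {r:.1f}  ->  var = {var_smc(d, n, r):.4f}")
```

Even this toy case shows how strongly the imputed constant shifts the variance, and hence the study's meta-analytic weight.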
Peer reviewed
Mikkel Helding Vembye; James Eric Pustejovsky; Therese Deocampo Pigott – Research Synthesis Methods, 2024
Sample size and statistical power are important factors to consider when planning a research synthesis. Power analysis methods have been developed for fixed-effect or random-effects models, but until recently these methods were limited to simple data structures with a single, independent effect per study. Recent work has provided power…
Descriptors: Sample Size, Robustness (Statistics), Effect Size, Social Science Research
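For the simple structure the abstract mentions, one independent effect per study, power for the test of the average effect under a random-effects model can be approximated in a few lines; a sketch under assumed inputs (the values of mu, v_bar, tau2, and k are invented for illustration):

```python
import numpy as np
from scipy.stats import norm

def re_meta_power(mu, v_bar, tau2, k, alpha=0.05):
    """Approximate power to detect an average effect mu in a
    random-effects meta-analysis of k independent effects with
    typical sampling variance v_bar and heterogeneity tau2."""
    se = np.sqrt((v_bar + tau2) / k)   # SE of the pooled effect
    z_crit = norm.ppf(1 - alpha / 2)
    lam = mu / se                      # noncentrality parameter
    return norm.cdf(lam - z_crit) + norm.cdf(-lam - z_crit)

print(f"power = {re_meta_power(mu=0.2, v_bar=0.04, tau2=0.02, k=30):.3f}")
```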
Peer reviewed
Vembye, Mikkel Helding; Pustejovsky, James Eric; Pigott, Therese Deocampo – Journal of Educational and Behavioral Statistics, 2023
Meta-analytic models for dependent effect sizes have grown increasingly sophisticated over the last few decades, which has created challenges for a priori power calculations. We introduce power approximations for tests of average effect sizes based upon several common approaches for handling dependent effect sizes. In a Monte Carlo simulation, we…
Descriptors: Meta Analysis, Robustness (Statistics), Statistical Analysis, Models
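One common way to approximate power with dependent effect sizes is to treat each study's J correlated estimates as a single averaged effect under a compound-symmetry working model; a hedged sketch (the within-study correlation rho and the other inputs are assumptions, and this omits the small-sample degrees-of-freedom corrections that the paper evaluates):

```python
import numpy as np
from scipy.stats import norm

def ce_meta_power(mu, v, J, rho, tau2, k, alpha=0.05):
    """Approximate power when each of k studies contributes J effects
    with sampling variance v and within-study correlation rho
    (correlated-effects working model, effects averaged per study)."""
    # Variance of a study's average effect under compound symmetry:
    v_avg = v * (1 + (J - 1) * rho) / J
    se = np.sqrt((v_avg + tau2) / k)
    z_crit = norm.ppf(1 - alpha / 2)
    lam = mu / se
    return norm.cdf(lam - z_crit) + norm.cdf(-lam - z_crit)

print(f"power = {ce_meta_power(mu=0.2, v=0.04, J=4, rho=0.6, tau2=0.02, k=30):.3f}")
```

Note how the correlated effects contribute less information than k * J independent ones would, which is why naive power calculations overstate power.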
Deng, Lifang; Yuan, Ke-Hai – Grantee Submission, 2022
Structural equation modeling (SEM) is regarded as an appropriate method when variables contain measurement errors. In contrast, path analysis with composite scores is preferred for prediction and diagnosis of individuals. While path analysis with composite scores has been criticized for yielding biased parameter estimates, recent literature pointed…
Descriptors: Structural Equation Models, Path Analysis, Weighted Scores, Error of Measurement
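The usual criticism of composite-score path analysis, attenuation of structural coefficients by measurement error, can be seen in a toy simulation; a sketch with invented population values (my illustration of the criticism the abstract references, not the authors' analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, rel = 100_000, 0.5, 0.7        # sample size, true slope, reliability

xi = rng.normal(size=n)                  # true predictor score
eta = beta * xi + rng.normal(scale=np.sqrt(1 - beta**2), size=n)
# Composite = true score + measurement error with reliability `rel`
x_comp = xi + rng.normal(scale=np.sqrt((1 - rel) / rel), size=n)

b_hat = np.cov(x_comp, eta)[0, 1] / np.var(x_comp)
print(f"true slope {beta:.2f}, composite-score estimate {b_hat:.2f} "
      f"(approx. beta * reliability = {beta * rel:.2f})")
```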
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
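The semantic-distance formulation can be sketched as a cosine distance between vector representations of the prompt and the response; in this sketch `embed` is a placeholder for a trained embedding model, not the authors' system:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: in practice, return a vector from a trained
    word- or sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=300)

def semantic_distance(prompt: str, response: str) -> float:
    """Cosine distance between prompt and response vectors; larger
    distances are taken as more original uses."""
    a, b = embed(prompt), embed(response)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos

print(semantic_distance("brick", "use it as a doorstop"))
```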
Peer reviewed
Edoardo G. Ostinelli; Orestis Efthimiou; Yan Luo; Clara Miguel; Eirini Karyotaki; Pim Cuijpers; Toshi A. Furukawa; Georgia Salanti; Andrea Cipriani – Research Synthesis Methods, 2024
When studies use different scales to measure continuous outcomes, standardised mean differences (SMD) are required to meta-analyse the data. However, outcomes are often reported as endpoint or change from baseline scores. Combining corresponding SMDs can be problematic and available guidance advises against this practice. We aimed to examine the…
Descriptors: Network Analysis, Meta Analysis, Depression (Psychology), Regression (Statistics)
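The incompatibility arises because endpoint and change-from-baseline scores have different standard deviations, so the two kinds of SMD scale the same mean difference differently; assuming equal pre/post SDs and a pre-post correlation r, the standard relationship sd_change = sd * sqrt(2(1 - r)) gives this illustrative sketch:

```python
import numpy as np

def smd_endpoint(mean_diff, sd):
    """SMD standardized by the endpoint SD."""
    return mean_diff / sd

def smd_change(mean_diff, sd, r):
    """SMD standardized by the SD of change scores,
    sd_change = sd * sqrt(2 * (1 - r)) when pre/post SDs are equal."""
    return mean_diff / (sd * np.sqrt(2 * (1 - r)))

md, sd = 3.0, 10.0
for r in (0.5, 0.7, 0.9):
    print(f"r={r}: endpoint SMD={smd_endpoint(md, sd):.2f}, "
          f"change SMD={smd_change(md, sd, r):.2f}")
```

The two coincide only at r = 0.5; for higher correlations the change-score SMD is inflated relative to the endpoint SMD, which is why combining them is problematic.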
Peer reviewed
Hartwig, Fernando P.; Davey Smith, George; Schmidt, Amand F.; Sterne, Jonathan A. C.; Higgins, Julian P. T.; Bowden, Jack – Research Synthesis Methods, 2020
Meta-analyses based on systematic literature reviews are commonly used to obtain a quantitative summary of the available evidence on a given topic. However, the reliability of any meta-analysis is constrained by that of its constituent studies. One major limitation is the possibility of small-study effects, when estimates from smaller and larger…
Descriptors: Meta Analysis, Research Methodology, Effect Size, Robustness (Statistics)
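A standard screen for small-study effects is Egger's regression of standardized effects on precision, testing the intercept; a minimal sketch on simulated data (the simulation parameters are invented, and this is a diagnostic rather than the adjustment methods the paper examines):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, mu = 40, 0.3
se = rng.uniform(0.05, 0.4, size=k)     # study standard errors
bias = 0.5 * se                         # small-study effect: noisier
yi = mu + bias + rng.normal(scale=se)   # studies drift upward

# Egger's test: regress yi/se on 1/se; a nonzero intercept flags asymmetry
res = stats.linregress(1 / se, yi / se)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=k - 2)
print(f"Egger intercept = {res.intercept:.2f} (t = {t:.2f}, p = {p:.4f})")
```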
Peer reviewed
Waterbury, Glenn Thomas; DeMars, Christine E. – Journal of Experimental Education, 2019
There is a need for effect sizes that are readily interpretable by a broad audience. One index that might fill this need is π, which represents the proportion of scores in one group that exceed the mean of another group. The robustness of estimates of π to violations of normality had not been explored. Using simulated data, three estimates…
Descriptors: Effect Size, Robustness (Statistics), Simulation, Research Methodology
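Under normality, π = Φ((μ₁ − μ₂)/σ₁); a sketch comparing a plug-in normal-theory estimate with a simple nonparametric proportion on simulated data (illustrative only; the paper's three estimators are not reproduced here):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
g1 = rng.normal(loc=0.5, scale=1.0, size=200)   # group 1
g2 = rng.normal(loc=0.0, scale=1.0, size=200)   # group 2

# Nonparametric: observed share of group-1 scores above group-2's mean
pi_np = np.mean(g1 > g2.mean())
# Normal-theory: plug sample moments into Phi((m1 - m2) / s1)
pi_norm = norm.cdf((g1.mean() - g2.mean()) / g1.std(ddof=1))

print(f"nonparametric pi = {pi_np:.3f}, normal-theory pi = {pi_norm:.3f}")
print(f"true pi = {norm.cdf(0.5):.3f}")
```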
Peer reviewed
Kowalski, Susan M.; Taylor, Joseph A.; Askinas, Karen M.; Wang, Qian; Zhang, Qi; Maddix, William P.; Tipton, Elizabeth – Journal of Research on Educational Effectiveness, 2020
Developing and maintaining a high-quality science teaching corps has become increasingly urgent with standards that require students to move beyond mastering facts to reasoning and arguing from evidence. "Effective" professional development (PD) for science teachers enhances teacher outcomes and, in turn, enhances primary and secondary…
Descriptors: Effect Size, Faculty Development, Science Teachers, Program Effectiveness
Jamshidi, Laleh; Declercq, Lies; Fernández-Castilla, Belén; Ferron, John M.; Moeyaert, Mariola; Beretvas, S. Natasha; Van den Noortgate, Wim – Grantee Submission, 2020
The focus of the current study is on handling the dependence among multiple regression coefficients representing the treatment effects when meta-analyzing data from single-case experimental studies. We compare the results when applying three different multilevel meta-analytic models (i.e., a univariate multilevel model avoiding the dependence, a…
Descriptors: Multivariate Analysis, Hierarchical Linear Modeling, Meta Analysis, Regression (Statistics)
Peer reviewed
Henry May; Aly Blakeney – AERA Online Paper Repository, 2022
This paper presents evidence confirming the validity of the regression discontinuity (RD) design in the Reading Recovery study by examining the ability of the RD design to replicate the first-grade results observed in the original i3 RCT focused on short-term impacts. Over 1,800 schools participated in the RD study across all four cohort years. The RD design used cutoff-based…
Descriptors: Reading Programs, Reading Instruction, Cutting Scores, Comparative Analysis
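In a cutoff-based RD design the impact is the jump in the outcome regression at the cutoff; a minimal local-linear sketch on simulated data (the bandwidth, model, and values are assumptions, not the study's estimation procedure):

```python
import numpy as np

rng = np.random.default_rng(7)
n, cutoff, effect = 5000, 0.0, 0.4
x = rng.normal(size=n)                      # assignment (pretest) score
treated = (x < cutoff).astype(float)        # lowest scorers get treatment
y = 0.8 * x + effect * treated + rng.normal(scale=0.5, size=n)

# Local linear fit on each side of the cutoff, within a bandwidth h
h = 0.5
def side_fit(mask):
    b, a = np.polyfit(x[mask], y[mask], 1)  # slope, intercept
    return a + b * cutoff                   # predicted y at the cutoff

left = side_fit((x >= cutoff - h) & (x < cutoff))
right = side_fit((x >= cutoff) & (x <= cutoff + h))
print(f"RD impact estimate (left - right at cutoff): {left - right:.3f}")
```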
Peer reviewed
Moeyaert, Mariola; Ugille, Maaike; Beretvas, S. Natasha; Ferron, John; Bunuan, Rommel; Van den Noortgate, Wim – International Journal of Social Research Methodology, 2017
This study investigates three methods to handle dependency among effect size estimates in meta-analysis arising from studies reporting multiple outcome measures taken on the same sample. The three-level approach is compared with the method of robust variance estimation, and with averaging effects within studies. A simulation study is performed,…
Descriptors: Meta Analysis, Effect Size, Robustness (Statistics), Hierarchical Linear Modeling
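The simplest of the three approaches, averaging effects within studies, is easy to sketch; the toy example below contrasts the standard error from averaging with the naive one obtained by treating dependent effects as independent (all parameter values are invented, and the three-level and robust-variance models need dedicated software):

```python
import numpy as np

rng = np.random.default_rng(3)
k, J, v, rho = 20, 4, 0.05, 0.6          # studies, effects/study, var, corr
cov = v * (rho + (1 - rho) * np.eye(J))  # compound-symmetric sampling cov
mu = 0.3

effects = np.array(
    [rng.multivariate_normal(np.full(J, mu), cov) for _ in range(k)])

# Naive: treat all k*J effects as independent
naive_se = np.sqrt(v / (k * J))
# Averaging: one mean effect per study, with the correct variance of a mean
v_avg = v * (1 + (J - 1) * rho) / J
avg_se = np.sqrt(v_avg / k)

print(f"pooled estimate = {effects.mean():.3f}")
print(f"naive SE = {naive_se:.4f} (too small), averaging SE = {avg_se:.4f}")
```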
Peer reviewed
May, Henry; Jones, Akisha; Blakeney, Aly – AERA Online Paper Repository, 2019
Using a regression discontinuity (RD) design provides statistically robust estimates while giving researchers an alternative causal estimation tool for educational environments where an RCT may not be feasible. Results from the External Evaluation of the i3 Scale-Up of Reading Recovery show that impact estimates were remarkably similar between a randomized control…
Descriptors: Regression (Statistics), Research Design, Randomized Controlled Trials, Research Methodology
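The logic of such within-design comparisons can be illustrated by simulating one population and estimating the same true impact twice, once by randomization and once by a cutoff rule; a toy sketch (not the evaluation's actual design or data):

```python
import numpy as np

rng = np.random.default_rng(11)
n, effect = 20_000, 0.30
pretest = rng.normal(size=n)

# RCT arm: coin-flip assignment; difference in means is unbiased
t_rct = rng.integers(0, 2, size=n)
y_rct = 0.7 * pretest + effect * t_rct + rng.normal(scale=0.5, size=n)
est_rct = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

# RD arm: assignment by pretest cutoff; adjust for the assignment
# variable to recover the effect at the cutoff
t_rd = (pretest < 0).astype(int)
y_rd = 0.7 * pretest + effect * t_rd + rng.normal(scale=0.5, size=n)
X = np.column_stack([np.ones(n), pretest, t_rd])
beta = np.linalg.lstsq(X, y_rd, rcond=None)[0]

print(f"RCT estimate = {est_rct:.3f}, RD estimate = {beta[2]:.3f}")
```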
Peer reviewed
Wilcox, Rand R.; Serang, Sarfaraz – Educational and Psychological Measurement, 2017
The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…
Descriptors: Hypothesis Testing, Bayesian Statistics, Computation, Effect Size
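One robust alternative emphasized in this literature is comparing trimmed means (Yuen's method) rather than raw means; a minimal SciPy sketch on heavy-tailed simulated data (illustrative of the class of techniques, not the article's specific recommendations):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Heavy-tailed data, where ordinary t-tests lose power
g1 = rng.standard_t(df=3, size=60) + 0.5
g2 = rng.standard_t(df=3, size=60)

t_res = stats.ttest_ind(g1, g2)            # Student's t on raw means
yuen = stats.ttest_ind(g1, g2, trim=0.2)   # Yuen's test: 20% trimming

print(f"Student t: p = {t_res.pvalue:.4f}")
print(f"Yuen (20% trimmed): p = {yuen.pvalue:.4f}")
```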
Peer reviewed
Gorard, Stephen; Gorard, Jonathan – International Journal of Social Research Methodology, 2016
This brief paper introduces a new approach to assessing the trustworthiness of numerically expressed research comparisons. The 'number needed to disturb' a research finding would be the number of counterfactual values that can be added to the smallest arm of any comparison before the difference or 'effect' size disappears, minus the number of…
Descriptors: Statistical Significance, Testing, Sampling, Attrition (Research Studies)
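One reading of the procedure can be coded directly: add counterfactual scores (here, each equal to the larger arm's mean) to the smaller arm until the standardized difference drops below a nominal threshold, then subtract the missing cases. Because the abstract is truncated, the stopping rule and the counterfactual value used below are assumptions:

```python
import numpy as np

def nntd(small, large, threshold=0.1, n_missing=0):
    """Sketch of a 'number needed to disturb': count counterfactual
    scores (each set to the larger arm's mean) added to the smaller
    arm until the standardized difference falls below `threshold`,
    then subtract the missing cases. Threshold choice is assumed."""
    small = list(small)
    counter = float(np.mean(large))
    added = 0

    def d():
        sd = np.sqrt((np.var(small, ddof=1) + np.var(large, ddof=1)) / 2)
        return abs(np.mean(large) - np.mean(small)) / sd

    while d() >= threshold:
        small.append(counter)
        added += 1
    return added - n_missing

rng = np.random.default_rng(9)
g_small = rng.normal(0.0, 1.0, size=30)
g_large = rng.normal(0.5, 1.0, size=40)
print(f"NNTD = {nntd(g_small, g_large, n_missing=3)}")
```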