Showing 1 to 15 of 20 results
Peer reviewed
Direct link
Elliott, Mark; Buttery, Paula – Educational and Psychological Measurement, 2022
We investigate two non-iterative estimation procedures for Rasch models, the pair-wise estimation procedure (PAIR) and the Eigenvector method (EVM), and identify theoretical issues with EVM for rating scale model (RSM) threshold estimation. We develop a new procedure to resolve these issues--the conditional pairwise adjacent thresholds procedure…
Descriptors: Item Response Theory, Rating Scales, Computation, Simulation
Peer reviewed
Direct link
Rubio-Aparicio, María; López-López, José Antonio; Viechtbauer, Wolfgang; Marín-Martínez, Fulgencio; Botella, Juan; Sánchez-Meca, Julio – Journal of Experimental Education, 2020
Mixed-effects models can be used to examine the association between a categorical moderator and the magnitude of the effect size. Two approaches are available to estimate the residual between-studies variance, τ²_res -- namely, separate estimation within each category of the moderator versus pooled estimation across all…
Descriptors: Meta Analysis, Effect Size, Computation, Classification
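The pooled-versus-separate distinction in the abstract above can be illustrated with the DerSimonian-Laird moment estimator of between-studies variance; this is a hedged sketch, not the authors' code, and the estimators examined in the article may differ (e.g., REML):

```python
def dl_tau2(y, v):
    """DerSimonian-Laird moment estimate of between-studies variance tau^2
    from effect sizes y and their sampling variances v (separate estimation
    would apply this within each moderator category)."""
    w = [1.0 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

def pooled_tau2(groups):
    """Pooled estimation of the residual tau^2 across moderator categories:
    sum the within-category Q statistics and scaling constants, with
    residual degrees of freedom k - m (k studies, m categories)."""
    q_tot = c_tot = 0.0
    k_tot = 0
    for y, v in groups:
        w = [1.0 / vi for vi in v]
        ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        q_tot += sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
        c_tot += sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        k_tot += len(y)
    return max(0.0, (q_tot - (k_tot - len(groups))) / c_tot)
```

The pooled estimator assumes a common τ²_res across categories, which is exactly the assumption whose consequences the article investigates.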
Peer reviewed
Download full text (PDF on ERIC)
What Works Clearinghouse, 2020
The What Works Clearinghouse (WWC) is an initiative of the U.S. Department of Education's Institute of Education Sciences (IES), which was established under the Education Sciences Reform Act of 2002. It is an important part of IES's strategy to use rigorous and relevant research, evaluation, and statistics to improve the nation's education system.…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Peer reviewed
Direct link
García-Pérez, Miguel A. – Educational and Psychological Measurement, 2017
Null hypothesis significance testing (NHST) has been the subject of debate for decades and alternative approaches to data analysis have been proposed. This article addresses this debate from the perspective of scientific inquiry and inference. Inference is an inverse problem and application of statistical methods cannot reveal whether effects…
Descriptors: Hypothesis Testing, Statistical Inference, Effect Size, Bayesian Statistics
Peer reviewed
Direct link
Gorard, Stephen; Gorard, Jonathan – International Journal of Social Research Methodology, 2016
This brief paper introduces a new approach to assessing the trustworthiness of research comparisons when expressed numerically. The 'number needed to disturb' a research finding would be the number of counterfactual values that can be added to the smallest arm of any comparison before the difference or 'effect' size disappears, minus the number of…
Descriptors: Statistical Significance, Testing, Sampling, Attrition (Research Studies)
Peer reviewed
Download full text (PDF on ERIC)
What Works Clearinghouse, 2017
The What Works Clearinghouse (WWC) systematic review process is the basis of many of its products, enabling the WWC to use consistent, objective, and transparent standards and procedures in its reviews, while also ensuring comprehensive coverage of the relevant literature. The WWC systematic review process consists of five steps: (1) Developing…
Descriptors: Educational Research, Evaluation Methods, Evidence, Statistical Significance
Peer reviewed
Direct link
Ruscio, John; Gera, Benjamin Lee – Multivariate Behavioral Research, 2013
Researchers are strongly encouraged to accompany the results of statistical tests with appropriate estimates of effect size. For 2-group comparisons, a probability-based effect size estimator ("A") has many appealing properties (e.g., it is easy to understand, robust to violations of parametric assumptions, insensitive to outliers). We review…
Descriptors: Psychological Studies, Gender Differences, Researchers, Test Results
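The probability-based effect size "A" named in the abstract above is the probability of superiority, P(X > Y) + 0.5·P(X = Y). A minimal sketch of the standard pairwise estimate (illustrative, not the authors' implementation):

```python
def prob_superiority(x, y):
    """Estimate A = P(X > Y) + 0.5 * P(X = Y) by comparing every
    cross-group pair; 0.5 means no effect, 1.0 means every x exceeds
    every y. Rank-based, so robust to outliers and non-normality."""
    greater = sum(1 for xi in x for yi in y if xi > yi)
    ties = sum(1 for xi in x for yi in y if xi == yi)
    return (greater + 0.5 * ties) / (len(x) * len(y))
```

Because only orderings matter, the statistic is unchanged by any monotone transformation of the scores, which is one of the appealing properties the abstract mentions.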
Peer reviewed
Download full text (PDF on ERIC)
Citkowicz, Martyna; Hedges, Larry V. – Society for Research on Educational Effectiveness, 2013
In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…
Descriptors: Multivariate Analysis, Effect Size, Sampling, Sample Size
Peer reviewed
Direct link
Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L. – Educational and Psychological Measurement, 2012
The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…
Descriptors: Computation, Statistical Analysis, Hypothesis Testing, Statistical Significance
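For readers unfamiliar with the CIs at issue in the study above, a minimal normal-approximation sketch (illustrative only; the study concerns how CIs are interpreted by readers, not how they are computed):

```python
from statistics import mean, stdev
from math import sqrt

def ci95(sample):
    """Normal-approximation 95% confidence interval for the mean:
    mean +/- 1.96 * standard error. With small samples a t critical
    value would replace 1.96."""
    m = mean(sample)
    se = stdev(sample) / sqrt(len(sample))
    return (m - 1.96 * se, m + 1.96 * se)
```

The interval conveys an estimate and its precision together, which is the presentational difference from a bare NHST verdict that the study investigates.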
Peer reviewed
Direct link
Gillespie, Amy; Graham, Steve – Exceptional Children, 2014
In this meta-analysis, the impact of writing interventions on the quality of writing produced by students with learning disabilities (LD) was assessed. Ancestral and electronic searches were used to locate experimental, quasi-experimental, and within-subjects design studies with participants in Grades 1-12 with documented LD. The effects of…
Descriptors: Meta Analysis, Intervention, Writing Instruction, Learning Disabilities
Peer reviewed
Direct link
Gentry, Marcia; Peters, Scott J. – Gifted Child Quarterly, 2009
Recent calls for reporting and interpreting effect sizes have been numerous, with the 5th edition of the "Publication Manual of the American Psychological Association" (2001) calling for the inclusion of effect sizes to interpret quantitative findings. Many top journals have required that effect sizes accompany claims of statistical significance.…
Descriptors: Effect Size, Gifted, Educational Research, Statistical Significance
Peer reviewed
Direct link
Carvajal, Jorge; Skorupski, William P. – Educational and Psychological Measurement, 2010
This study is an evaluation of the behavior of the Liu-Agresti estimator of the cumulative common odds ratio when identifying differential item functioning (DIF) with polytomously scored test items using small samples. The Liu-Agresti estimator has been proposed by Penfield and Algina as a promising approach for the study of polytomous DIF but no…
Descriptors: Test Bias, Sample Size, Test Items, Computation
Peer reviewed
Download full text (PDF on ERIC)
What Works Clearinghouse, 2014
This "What Works Clearinghouse Procedures and Standards Handbook (Version 3.0)" provides a detailed description of the standards and procedures of the What Works Clearinghouse (WWC). The remaining chapters of this Handbook are organized to take the reader through the basic steps that the WWC uses to develop a review protocol, identify…
Descriptors: Educational Research, Guides, Intervention, Classification
Moses, Tim; Miao, Jing; Dorans, Neil – Educational Testing Service, 2010
This study compared the accuracies of four differential item functioning (DIF) estimation methods, where each method makes use of only one of the following: raw data, logistic regression, loglinear models, or kernel smoothing. The major focus was on the estimation strategies' potential for estimating score-level, conditional DIF. A secondary focus…
Descriptors: Test Bias, Statistical Analysis, Computation, Scores
Peer reviewed
Direct link
Thompson, Bruce – Middle Grades Research Journal, 2009
The present article provides a primer on using effect sizes in research. A small heuristic data set is used in order to make the discussion concrete. Additionally, various admonitions for best practice in reporting and interpreting effect sizes are presented. Among these is the admonition to not use Cohen's benchmarks for "small," "medium," and…
Descriptors: Educational Research, Effect Size, Computation, Research Methodology
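The effect sizes and Cohen's benchmarks discussed in the primer above refer to standardized mean differences such as Cohen's d; a minimal sketch (illustrative, not taken from the article):

```python
from statistics import mean, variance
from math import sqrt

def cohens_d(a, b):
    """Standardized mean difference: (mean_a - mean_b) / pooled SD,
    where the pooled SD weights each group's sample variance by its
    degrees of freedom."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var)
```

The article's admonition applies to the output of such a function: a d of 0.4 is not automatically "small-to-medium" -- its meaning depends on the outcome, the intervention, and effects typical of the literature.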