Publication Date
  In 2025: 0
  Since 2024: 2
  Since 2021 (last 5 years): 4
  Since 2016 (last 10 years): 18
  Since 2006 (last 20 years): 44
Descriptor
  Hypothesis Testing: 106
  Monte Carlo Methods: 106
  Statistical Analysis: 45
  Correlation: 29
  Comparative Analysis: 27
  Statistical Significance: 24
  Sample Size: 23
  Mathematical Models: 19
  Research Methodology: 18
  Analysis of Variance: 16
  Research Design: 14
Education Level
  Higher Education: 4
  Elementary Education: 3
  Middle Schools: 3
  Postsecondary Education: 2
  Early Childhood Education: 1
  Grade 3: 1
  Grade 7: 1
  High Schools: 1
  Junior High Schools: 1
  Primary Education: 1
  Secondary Education: 1
Audience
  Researchers: 9
  Practitioners: 1
  Students: 1
  Teachers: 1
Assessments and Surveys
  Motivated Strategies for…: 1
  Multifactor Leadership…: 1
Bradley David Rogers – ProQuest LLC, 2022
Considered normative since the second half of the 20th century (Danziger, 1990), null hypothesis statistical testing (NHST) has received consistent, largely unheeded criticism. Critiques have received more attention in recent years with the recognition of the replication crisis in the social sciences and the American Statistical Association's statement…
Descriptors: Statistical Analysis, Hypothesis Testing, History, Monte Carlo Methods
Shunji Wang; Katerina M. Marcoulides; Jiashan Tang; Ke-Hai Yuan – Structural Equation Modeling: A Multidisciplinary Journal, 2024
A necessary step in applying bi-factor models is to evaluate the need for domain factors when a general factor is in place. Conventional null hypothesis testing (NHT) has commonly been used for this purpose. However, conventional NHT faces challenges when the domain loadings are weak or the sample size is insufficient. This article proposes…
Descriptors: Hypothesis Testing, Error of Measurement, Comparative Analysis, Monte Carlo Methods
Vembye, Mikkel Helding; Pustejovsky, James Eric; Pigott, Therese Deocampo – Journal of Educational and Behavioral Statistics, 2023
Meta-analytic models for dependent effect sizes have grown increasingly sophisticated over the last few decades, which has created challenges for a priori power calculations. We introduce power approximations for tests of average effect sizes based upon several common approaches for handling dependent effect sizes. In a Monte Carlo simulation, we…
Descriptors: Meta Analysis, Robustness (Statistics), Statistical Analysis, Models
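The core Monte Carlo idea behind such power validation can be illustrated in the simplest setting, a two-sample comparison (a generic sketch, not the authors' procedure for dependent effect sizes; `mc_power` and its defaults are ours):

```python
import math
import random
import statistics

def mc_power(effect_size, n_per_group, reps=2000, seed=1):
    """Estimate the power of a two-sample test by Monte Carlo:
    repeatedly simulate data under the alternative hypothesis and
    count how often the null is rejected at alpha = 0.05
    (two-sided, normal approximation to the test statistic)."""
    rng = random.Random(seed)
    z_crit = 1.96
    rejections = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        se = math.sqrt(statistics.variance(a) / n_per_group
                       + statistics.variance(b) / n_per_group)
        t = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(t) > z_crit:
            rejections += 1
    return rejections / reps
```

With `effect_size=0.5` and 64 observations per group, the estimate lands near the textbook power of roughly 0.8 for a medium effect; analytic approximations like those the article introduces are checked against exactly this kind of simulation.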
Clemens Draxler; Andreas Kurz; Can Gürer; Jan Philipp Nolte – Journal of Educational and Behavioral Statistics, 2024
A modified and improved inductive inferential approach to evaluate item discriminations in a conditional maximum likelihood and Rasch modeling framework is suggested. The new approach involves the derivation of four hypothesis tests. It implies a linear restriction of the assumed set of probability distributions in the classical approach that…
Descriptors: Inferences, Test Items, Item Analysis, Maximum Likelihood Statistics
Nordstokke, David W.; Colp, S. Mitchell – Practical Assessment, Research & Evaluation, 2018
Often, when testing for shift in location, researchers will utilize nonparametric statistical tests in place of their parametric counterparts when there is evidence or belief that the assumptions of the parametric test are not met (i.e., normally distributed dependent variables). An underlying and often unattended to assumption of nonparametric…
Descriptors: Nonparametric Statistics, Statistical Analysis, Monte Carlo Methods, Sample Size
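The kind of simulation such studies rely on can be sketched with a rank-sum test in stdlib Python (a generic illustration; the function names and defaults are ours, and ties are assumed away since the simulated data are continuous):

```python
import math
import random

def rank_sum_p(x, y):
    """Two-sided p-value for the Wilcoxon rank-sum (Mann-Whitney) test,
    using the large-sample normal approximation and assuming no ties."""
    combined = sorted(x + y)
    rank = {v: i + 1 for i, v in enumerate(combined)}
    w = sum(rank[v] for v in x)                  # rank sum of sample x
    n, m = len(x), len(y)
    mu = n * (n + m + 1) / 2.0
    sigma = math.sqrt(n * m * (n + m + 1) / 12.0)
    return math.erfc(abs((w - mu) / sigma) / math.sqrt(2))

def type_i_error(n, m, sd_y=1.0, reps=2000, seed=2):
    """Monte Carlo rejection rate at alpha = 0.05 when both population
    means are equal; setting sd_y != 1 violates the identical-shape
    assumption that nonparametric location tests quietly make."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        y = [rng.gauss(0.0, sd_y) for _ in range(m)]
        if rank_sum_p(x, y) < 0.05:
            hits += 1
    return hits / reps
```

With equal variances the rejection rate stays near the nominal 0.05; combining unequal variances with unequal group sizes is the kind of assumption violation under which it can drift.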
Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy – Educational and Psychological Measurement, 2016
Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…
Descriptors: Accuracy, Factor Analysis, Hypothesis Testing, Monte Carlo Methods
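Traditional parallel analysis, which the revision builds on, can be sketched as follows (a minimal Horn-style implementation using NumPy; the function name and defaults are ours, and the revised R-PA procedure differs in how the comparison is conditioned):

```python
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    """Horn's parallel analysis: retain the leading factors whose
    sample correlation-matrix eigenvalues exceed the mean eigenvalues
    of random normal data with the same dimensions."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    sample_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sim = np.empty((n_sims, p))
    for s in range(n_sims):
        noise = rng.standard_normal((n, p))
        sim[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    threshold = sim.mean(axis=0)
    # count the leading run of sample eigenvalues above the random baseline
    return int(np.cumprod(sample_eigs > threshold).sum())
```

On data generated from two strong independent factors, the procedure recovers two factors, which is the kind of accuracy check the Monte Carlo evaluation performs at scale.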
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
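A classic MTP of the kind such papers survey is Holm's step-down correction, which controls the familywise error rate across a set of tests (a standard procedure shown for illustration, not necessarily the one the article recommends):

```python
def holm_bonferroni(pvalues, alpha=0.05):
    """Holm's step-down procedure: test p-values from smallest to
    largest against increasingly lenient thresholds alpha/(m - rank),
    stopping at the first failure. Controls the familywise error rate."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # all remaining (larger) p-values fail as well
    return reject
```

For example, `holm_bonferroni([0.01, 0.04, 0.03, 0.005])` rejects only the first and last hypotheses: 0.005 clears 0.05/4 and 0.01 clears 0.05/3, but 0.03 exceeds 0.05/2, so the procedure stops there.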
Gnambs, Timo; Staufenbiel, Thomas – Research Synthesis Methods, 2016
Two new methods for the meta-analysis of factor loadings are introduced and evaluated by Monte Carlo simulations. The direct method pools each factor loading individually, whereas the indirect method synthesizes correlation matrices reproduced from factor loadings. The results of the two simulations demonstrated that the accuracy of…
Descriptors: Accuracy, Meta Analysis, Factor Structure, Monte Carlo Methods
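The direct/indirect contrast can be illustrated for the direct case with a toy fixed-effect pooling (a deliberate simplification with our own function name; the published estimators also account for sampling variances and transformations of the loadings):

```python
def pool_loadings_direct(loadings_by_study, ns):
    """'Direct' meta-analytic pooling sketch: average each factor
    loading individually across studies, weighting by sample size."""
    k = len(loadings_by_study[0])
    total_n = sum(ns)
    return [sum(study[j] * n for study, n in zip(loadings_by_study, ns)) / total_n
            for j in range(k)]
```

The indirect method would instead rebuild each study's implied correlation matrix from its loadings, pool the matrices, and refactor the pooled matrix.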
Pan, Tianshu; Yin, Yue – Applied Measurement in Education, 2017
In this article, we propose using the Bayes factors (BF) to evaluate person fit in item response theory models under the framework of Bayesian evaluation of an informative diagnostic hypothesis. We first discuss the theoretical foundation for this application and how to analyze person fit using BF. To demonstrate the feasibility of this approach,…
Descriptors: Bayesian Statistics, Goodness of Fit, Item Response Theory, Monte Carlo Methods
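The Bayes factor machinery involved can be illustrated in the simplest conjugate setting, a binomial point null against a uniform prior (just the BF idea, not the authors' person-fit statistic):

```python
import math

def bf01_binomial(k, n):
    """Bayes factor BF01 for H0: p = 0.5 versus H1: p ~ Uniform(0, 1),
    given k successes in n binomial trials. Under the uniform prior
    the marginal likelihood of any k is exactly 1/(n + 1)."""
    p_h0 = math.comb(n, k) * 0.5 ** n   # likelihood under the point null
    p_h1 = 1.0 / (n + 1)                # marginal likelihood under H1
    return p_h0 / p_h1
```

Five successes in ten trials gives BF01 ≈ 2.7 (mild support for the null), while ten out of ten gives BF01 ≈ 0.01 (strong evidence against it); person-fit evaluation applies the same logic to response patterns under an IRT model.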
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo – Educational and Psychological Measurement, 2015
Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…
Descriptors: Factor Analysis, Error of Measurement, Accuracy, Hypothesis Testing
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
An, Chen; Braun, Henry; Walsh, Mary E. – Educational Measurement: Issues and Practice, 2018
Making causal inferences from a quasi-experiment is difficult. Sensitivity analysis approaches to address hidden selection bias thus have gained popularity. This study serves as an introduction to a simple but practical form of sensitivity analysis using Monte Carlo simulation procedures. We examine estimated treatment effects for a school-based…
Descriptors: Statistical Inference, Intervention, Program Effectiveness, Quasiexperimental Design
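The flavor of Monte Carlo sensitivity analysis described can be sketched as follows (an illustrative setup with our own parameter names; the study's actual procedure differs in detail):

```python
import math
import random
import statistics

def hidden_bias(gamma_select, gamma_outcome, n=1000, reps=50, seed=0):
    """Monte Carlo sensitivity check: an unobserved confounder U shifts
    both the probability of treatment (via gamma_select) and the outcome
    (via gamma_outcome). The true treatment effect is zero, so the mean
    treated-control difference estimates the hidden-selection bias."""
    rng = random.Random(seed)
    biases = []
    for _ in range(reps):
        treated, control = [], []
        for _ in range(n):
            u = rng.gauss(0.0, 1.0)                      # hidden confounder
            p_treat = 1.0 / (1.0 + math.exp(-gamma_select * u))
            y = gamma_outcome * u + rng.gauss(0.0, 1.0)  # no real effect
            (treated if rng.random() < p_treat else control).append(y)
        biases.append(statistics.mean(treated) - statistics.mean(control))
    return statistics.mean(biases)
```

Sweeping the two gamma parameters over plausible values shows how strong hidden selection would have to be to explain away an observed effect, which is the question sensitivity analysis asks of a quasi-experiment.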
Spencer, Bryden – ProQuest LLC, 2016
Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…
Descriptors: Monte Carlo Methods, Comparative Analysis, Accuracy, High Stakes Tests
Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J. – Journal of Memory and Language, 2013
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the…
Descriptors: Hypothesis Testing, Psycholinguistics, Models, Monte Carlo Methods