Showing 1 to 15 of 41 results
Peer reviewed
Direct link
Jason A. Schoeneberger; Christopher Rhoads – American Journal of Evaluation, 2025
Regression discontinuity (RD) designs are increasingly used for causal evaluations. However, the literature contains little guidance for conducting a moderation analysis within an RD design context. The current article focuses on moderation with a single binary variable. A simulation study compares (1) different bandwidth selectors and (2) local…
Descriptors: Regression (Statistics), Causal Models, Evaluation Methods, Multivariate Analysis
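A minimal sketch of the kind of analysis the entry above describes: local linear RD estimation within a fixed bandwidth, with a treatment-by-moderator interaction for a single binary moderator. The data, the bandwidth, and the coefficient values are invented for illustration and are not the authors' simulation design.

```python
# Local linear RD estimation with a binary moderator (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1, 1, n)           # running variable, cutoff at 0
g = rng.integers(0, 2, n)           # binary moderator (e.g., subgroup)
t = (x >= 0).astype(float)          # treatment assignment at the cutoff
y = 1.0 + 0.5 * x + 0.4 * t + 0.3 * t * g + rng.normal(0, 1, n)

h = 0.25                            # a fixed bandwidth, for illustration
w = np.abs(x) <= h                  # observations inside the bandwidth

# Local linear model: y = b0 + b1*x + b2*t + b3*t*x + b4*g + b5*t*g + e
X = np.column_stack([np.ones(w.sum()), x[w], t[w], t[w] * x[w], g[w], t[w] * g[w]])
beta, *_ = np.linalg.lstsq(X, y[w], rcond=None)

print("RD effect for g = 0      :", beta[2])   # effect in the reference group
print("moderation (difference)  :", beta[5])   # shift in the effect when g = 1
```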
Peer reviewed
Direct link
Yan Xia; Xinchang Zhou – Educational and Psychological Measurement, 2025
Parallel analysis has been considered one of the most accurate methods for determining the number of factors in factor analysis. One major advantage of parallel analysis over traditional factor retention methods (e.g., Kaiser's rule) is that it addresses the sampling variability of eigenvalues obtained from the identity matrix, representing the…
Descriptors: Factor Analysis, Statistical Analysis, Evaluation Methods, Sampling
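Parallel analysis, as described in the entry above, can be sketched in a few lines: retain as many factors as have observed eigenvalues exceeding a reference percentile of eigenvalues from random, uncorrelated data of the same dimensions. The simulated two-factor data and the 95th-percentile criterion below are illustrative assumptions, not the authors' design.

```python
# Parallel analysis: compare observed eigenvalues with those from random data.
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 8
latent = rng.normal(size=(n, 2))            # two factors, for illustration
loadings = np.zeros((p, 2))
loadings[:4, 0] = 0.7
loadings[4:, 1] = 0.7
data = latent @ loadings.T + rng.normal(scale=0.7, size=(n, p))

obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

n_sims = 200
rand_eig = np.empty((n_sims, p))
for s in range(n_sims):
    r = rng.normal(size=(n, p))             # uncorrelated reference data
    rand_eig[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]

threshold = np.percentile(rand_eig, 95, axis=0)   # 95th-percentile criterion
print("factors retained:", int(np.sum(obs_eig > threshold)))
```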
Peer reviewed
Direct link
Guido Schwarzer; Gerta Rücker; Cristina Semaca – Research Synthesis Methods, 2024
The "LFK" index has been promoted as an improved method to detect bias in meta-analysis. Putatively, its performance does not depend on the number of studies in the meta-analysis. We conducted a simulation study, comparing the "LFK" index test to three standard tests for funnel plot asymmetry in settings with smaller or larger…
Descriptors: Bias, Meta Analysis, Simulation, Evaluation Methods
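The entry above compares the "LFK" index against standard funnel-plot asymmetry tests. For orientation, here is a minimal sketch of one such standard test, Egger's regression (not the LFK index itself), run on simulated effect sizes; the study count and error distribution are arbitrary illustrative choices.

```python
# Egger's regression test for funnel-plot asymmetry (illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
k = 20                                   # number of studies
se = rng.uniform(0.05, 0.5, k)           # study standard errors
effect = 0.3 + rng.normal(0, se)         # observed effects around a true 0.3

# Regress the standardized effect on precision; a nonzero intercept
# indicates funnel-plot asymmetry.
z = effect / se
precision = 1.0 / se
X = np.column_stack([np.ones(k), precision])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
resid = z - X @ beta
s2 = resid @ resid / (k - 2)
cov = s2 * np.linalg.inv(X.T @ X)
t_val = beta[0] / np.sqrt(cov[0, 0])
p_val = 2 * stats.t.sf(abs(t_val), df=k - 2)
print(f"Egger intercept = {beta[0]:.3f}, p = {p_val:.3f}")
```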
Peer reviewed
Direct link
Jihong Zhang; Jonathan Templin; Xinya Liang – Journal of Educational Measurement, 2024
Bayesian diagnostic classification modeling has recently become popular in health psychology, education, and sociology. Typically, information criteria are used for model selection when researchers want to choose the best model among alternative models. In Bayesian estimation, posterior predictive checking is a flexible Bayesian model…
Descriptors: Bayesian Statistics, Cognitive Measurement, Models, Classification
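Posterior predictive checking, mentioned above, follows the same recipe regardless of the model: draw parameters from the posterior, simulate replicated data, and compare a discrepancy statistic with its observed value. The sketch below uses a simple conjugate beta-binomial model purely for illustration, not a diagnostic classification model.

```python
# Generic posterior predictive check with a conjugate beta-binomial model.
import numpy as np

rng = np.random.default_rng(3)
y = rng.binomial(1, 0.7, size=50)                  # observed binary responses
a_post, b_post = 1 + y.sum(), 1 + (1 - y).sum()    # Beta(1,1) prior -> posterior

n_draws = 2000
stat_obs = y.mean()                                # discrepancy: mean response
stat_rep = np.empty(n_draws)
for d in range(n_draws):
    theta = rng.beta(a_post, b_post)               # posterior draw
    y_rep = rng.binomial(1, theta, size=y.size)    # replicated data
    stat_rep[d] = y_rep.mean()

ppp = np.mean(stat_rep >= stat_obs)                # posterior predictive p-value
print("posterior predictive p-value:", ppp)
```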
Peer reviewed
Direct link
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly due to the inclusion of a random factor variable (latent variable). The random factor variable gives rise to the incidental parameter problem, since the number of parameters grows as data from new persons are included. Therefore, IRT models require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
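The estimation issue described above, person parameters multiplying without bound, is usually handled by integrating the latent variable out (marginal maximum likelihood). Below is a minimal sketch of that integration for a Rasch model, using Gauss-Hermite quadrature with the item difficulties held fixed; the responses and difficulty values are simulated for illustration.

```python
# Marginal log-likelihood of a Rasch model with the latent ability
# integrated out by Gauss-Hermite quadrature (illustrative data).
import numpy as np

rng = np.random.default_rng(4)
n_items, n_persons = 10, 500
b = rng.normal(0, 1, n_items)                      # Rasch item difficulties
theta_true = rng.normal(0, 1, n_persons)
p = 1 / (1 + np.exp(-(theta_true[:, None] - b)))   # response probabilities
resp = rng.binomial(1, p)                          # simulated responses

nodes, weights = np.polynomial.hermite.hermgauss(21)
theta_q = np.sqrt(2) * nodes                       # quadrature points for N(0,1)
w_q = weights / np.sqrt(np.pi)

def marginal_loglik(b_hat):
    """Log-likelihood with the person parameters integrated out."""
    p_q = 1 / (1 + np.exp(-(theta_q[:, None] - b_hat)))          # (quad, items)
    # Likelihood of each response pattern at each quadrature point.
    lik = np.prod(np.where(resp[:, None, :] == 1, p_q, 1 - p_q), axis=2)
    return np.sum(np.log(lik @ w_q))

print("marginal log-likelihood at true difficulties:", marginal_loglik(b))
```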
Peer reviewed
PDF on ERIC (download full text)
Tugay Kaçak; Abdullah Faruk Kiliç – International Journal of Assessment Tools in Education, 2025
Researchers continue to choose PCA in scale development and adaptation studies because it is the default setting and because it overestimates measurement quality. When PCA is utilized in investigations, the explained variance and factor loadings can be exaggerated. PCA, in contrast to the models given in the literature, should be investigated in…
Descriptors: Factor Analysis, Monte Carlo Methods, Mathematical Models, Sample Size
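The overestimation issue noted above is easy to demonstrate: on data generated from a one-factor model with loadings of 0.6, PCA loadings come out visibly larger than the generating values, while an iterated principal-axis factor solution stays close to 0.6. The simulation below is an illustrative sketch, not the authors' Monte Carlo design.

```python
# PCA loadings vs. iterated principal-axis factor loadings on one-factor data.
import numpy as np

rng = np.random.default_rng(5)
n, p, lam = 1000, 6, 0.6
f = rng.normal(size=n)
x = lam * f[:, None] + np.sqrt(1 - lam**2) * rng.normal(size=(n, p))
R = np.corrcoef(x, rowvar=False)

# PCA "loadings": first eigenvector scaled by the root of its eigenvalue.
vals, vecs = np.linalg.eigh(R)
pca_loadings = np.abs(vecs[:, -1]) * np.sqrt(vals[-1])

# Iterated principal-axis factoring: replace the diagonal with communalities.
h2 = 1 - 1 / np.diag(np.linalg.inv(R))       # initial communalities (SMC)
for _ in range(100):
    Rh = R.copy()
    np.fill_diagonal(Rh, h2)
    vals, vecs = np.linalg.eigh(Rh)
    load = np.abs(vecs[:, -1]) * np.sqrt(max(vals[-1], 0))
    h2 = load**2

print("PCA loadings :", np.round(pca_loadings, 2))   # typically > 0.6
print("PAF loadings :", np.round(load, 2))           # close to the true 0.6
```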
Peer reviewed
Direct link
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple choice (MC) and mixed format tests within the common item nonequivalent group design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
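For orientation, here is a minimal sketch of IRT linking on common items, using the simple unidimensional mean-sigma transformation rather than the multidimensional bifactor linking methods compared in the paper; the 2PL item parameters are made-up illustrative values.

```python
# Mean-sigma linking of common-item parameters onto a base scale.
import numpy as np

# 2PL common-item parameters from two separately calibrated forms.
a_old = np.array([1.10, 0.90, 1.30, 0.80])    # discriminations, base scale
b_old = np.array([-0.50, 0.20, 0.70, 1.00])   # difficulties, base scale
a_new = np.array([0.90, 0.75, 1.05, 0.67])
b_new = np.array([-0.10, 0.74, 1.34, 1.70])

# Find A, B so that b_old ~ A * b_new + B on the common items.
A = b_old.std() / b_new.std()
B = b_old.mean() - A * b_new.mean()

b_linked = A * b_new + B        # difficulties placed on the base scale
a_linked = a_new / A            # discriminations rescaled accordingly
print("A =", round(A, 3), "B =", round(B, 3))
print("linked difficulties:", np.round(b_linked, 2))
```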
Peer reviewed
Direct link
Stefanie A. Wind; Benjamin Lugu – Applied Measurement in Education, 2024
Researchers who use measurement models for evaluation purposes often select models with stringent requirements, such as Rasch models, which are parametric. Mokken Scale Analysis (MSA) offers a theory-driven nonparametric modeling approach that may be more appropriate for some measurement applications. Researchers have discussed using MSA as a…
Descriptors: Item Response Theory, Data Analysis, Simulation, Nonparametric Statistics
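A minimal sketch of the scalability coefficient H that Mokken Scale Analysis builds on, for dichotomous items: observed inter-item covariances divided by their maximum possible values given the item proportions. The simulated responses below are illustrative and not taken from the study above; the 0.3 cutoff in the comment is the conventional minimum for a Mokken scale.

```python
# Scale-level scalability coefficient H for dichotomous items.
import numpy as np

rng = np.random.default_rng(6)
n, k = 500, 5
theta = rng.normal(size=n)
b = np.linspace(-1, 1, k)
prob = 1 / (1 + np.exp(-(theta[:, None] - b)))
x = (rng.uniform(size=(n, k)) < prob).astype(int)   # simulated item responses

p = x.mean(axis=0)                        # item proportions
cov = np.cov(x, rowvar=False, bias=True)  # observed covariances

num, den = 0.0, 0.0
for i in range(k):
    for j in range(i + 1, k):
        lo, hi = min(p[i], p[j]), max(p[i], p[j])
        num += cov[i, j]
        den += lo * (1 - hi)              # maximum covariance given marginals
print("scale H =", round(num / den, 3))   # H >= 0.3 is the usual minimum in MSA
```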
Peer reviewed
Direct link
Martin Bäckström; Fredrik Björklund – Educational and Psychological Measurement, 2024
The forced-choice response format is often considered superior to the standard Likert-type format for controlling social desirability in personality inventories. We performed simulations and found that the trait information based on the two formats converges when the number of items is high and forced-choice items are mixed with regard to…
Descriptors: Likert Scales, Item Analysis, Personality Traits, Personality Measures
Peer reviewed
Direct link
A. M. Sadek; Fahad Al-Muhlaki – Measurement: Interdisciplinary Research and Perspectives, 2024
In this study, the accuracy of the artificial neural network (ANN) was assessed considering the uncertainties associated with the randomness of the data and the lack of learning. The Monte Carlo algorithm was applied to simulate the randomness of the input variables and evaluate the output distribution. It has been shown that under certain…
Descriptors: Monte Carlo Methods, Accuracy, Artificial Intelligence, Guidelines
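A minimal sketch of the Monte Carlo idea described above: sample the uncertain inputs many times, push each sample through a fixed network, and summarize the spread of the outputs. The two-layer tanh network and the 10% input noise are arbitrary illustrative assumptions.

```python
# Monte Carlo propagation of input uncertainty through a small network.
import numpy as np

rng = np.random.default_rng(7)

# A fixed two-layer network (one hidden layer with tanh activation).
W1 = rng.normal(size=(3, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 1)); b2 = rng.normal(size=1)

def forward(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Uncertain input: a nominal value plus Gaussian measurement noise.
nominal = np.array([0.5, -1.0, 2.0])
samples = nominal + rng.normal(scale=0.1, size=(10_000, 3))

outputs = forward(samples).ravel()
print("output mean :", outputs.mean())
print("output std  :", outputs.std())
print("95% interval:", np.percentile(outputs, [2.5, 97.5]))
```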
Peer reviewed
Direct link
Johan Lyrvall; Zsuzsa Bakk; Jennifer Oser; Roberto Di Mari – Structural Equation Modeling: A Multidisciplinary Journal, 2024
We present a bias-adjusted three-step estimation approach for multilevel latent class (LC) models with covariates. The proposed approach involves (1) fitting a single-level measurement model while ignoring the multilevel structure, (2) assigning units to latent classes, and (3) fitting the multilevel model with the covariates while controlling for…
Descriptors: Hierarchical Linear Modeling, Statistical Bias, Error of Measurement, Simulation
Peer reviewed
Direct link
Suppanut Sriutaisuk; Yu Liu; Seungwon Chung; Hanjoe Kim; Fei Gu – Educational and Psychological Measurement, 2025
The multiple imputation two-stage (MI2S) approach holds promise for evaluating the model fit of structural equation models for ordinal variables with multiply imputed data. However, previous studies only examined the performance of MI2S-based residual-based test statistics. This study extends previous research by examining the performance of two…
Descriptors: Structural Equation Models, Error of Measurement, Programming Languages, Goodness of Fit
Peer reviewed
Direct link
Kazuhiro Yamaguchi – Journal of Educational and Behavioral Statistics, 2025
This study proposes a Bayesian method for diagnostic classification models (DCMs) in a partially known Q-matrix setting that lies between exploratory and confirmatory DCMs. This Q-matrix setting is practical and useful because test experts have pre-knowledge of the Q-matrix but cannot readily specify it completely. The proposed method employs priors for…
Descriptors: Models, Classification, Bayesian Statistics, Evaluation Methods
Peer reviewed
Direct link
Pere J. Ferrando; Ana Hernández-Dorado; Urbano Lorenzo-Seva – Structural Equation Modeling: A Multidisciplinary Journal, 2024
A frequent criticism of exploratory factor analysis (EFA) is that it does not allow correlated residuals to be modelled, while they can be routinely specified in the confirmatory (CFA) model. In this article, we propose an EFA approach in which both the common factor solution and the residual matrix are unrestricted (i.e., the correlated residuals…
Descriptors: Correlation, Factor Analysis, Models, Goodness of Fit
Peer reviewed
Direct link
Xiao Liu; Zhiyong Zhang; Lijuan Wang – Grantee Submission, 2024
In psychology, researchers are often interested in testing hypotheses about mediation, such as testing the presence of a mediation effect of a treatment (e.g., intervention assignment) on an outcome via a mediator. An increasingly popular approach is Bayesian hypothesis testing with Bayes factors (BFs). Despite the growing…
Descriptors: Sample Size, Bayesian Statistics, Programming Languages, Simulation
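For context, one simple way to obtain an approximate Bayes factor for a single mediation path is the BIC approximation BF10 ≈ exp((BIC0 - BIC1) / 2), applied here to the mediator-on-treatment regression; a full mediation test would also require the corresponding evidence for the mediator-to-outcome path. The sketch below is a generic illustration, not the specific BF procedure evaluated in the submission.

```python
# BIC-approximated Bayes factor for the a-path of a mediation model.
import numpy as np

rng = np.random.default_rng(8)
n = 200
x = rng.integers(0, 2, n).astype(float)    # treatment assignment
m = 0.4 * x + rng.normal(size=n)           # mediator (true a-path = 0.4)
y = 0.5 * m + rng.normal(size=n)           # outcome (true b-path = 0.5)

def bic_linear(X, y_vec):
    """BIC of an OLS fit under Gaussian errors (k coefficients + error variance)."""
    n_obs, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y_vec, rcond=None)
    resid = y_vec - X @ beta
    sigma2 = resid @ resid / n_obs
    loglik = -0.5 * n_obs * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + (k + 1) * np.log(n_obs)

ones = np.ones(n)
bic0 = bic_linear(np.column_stack([ones]), m)       # a-path absent
bic1 = bic_linear(np.column_stack([ones, x]), m)    # a-path present
bf10_a = np.exp((bic0 - bic1) / 2)
print("approximate BF10 for the a-path:", round(bf10_a, 2))
```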