Showing all 11 results
Peer reviewed
Joemari Olea; Kevin Carl Santos – Journal of Educational and Behavioral Statistics, 2024
Although the generalized deterministic inputs, noisy "and" gate model (G-DINA; de la Torre, 2011) is a general cognitive diagnosis model (CDM), it does not account for heterogeneity rooted in latent groups that exist in the population of examinees. To address this, the study proposes the mixture G-DINA model, a CDM that…
Descriptors: Cognitive Measurement, Models, Algorithms, Simulation
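A minimal Python sketch of the idea behind a mixture cognitive diagnosis model, assuming a single item that measures two attributes under the G-DINA identity link; the delta parameters, group labels, and mixing proportions below are hypothetical and not taken from the article.

import numpy as np
from itertools import product

# G-DINA item response function for an item measuring two attributes
# (identity link): P(X = 1 | alpha) = d0 + d1*a1 + d2*a2 + d12*a1*a2
def gdina_prob(alpha, deltas):
    a1, a2 = alpha
    d0, d1, d2, d12 = deltas
    return d0 + d1 * a1 + d2 * a2 + d12 * a1 * a2

# Hypothetical item parameters for two latent (mixture) groups of examinees
deltas_group = {
    0: (0.10, 0.30, 0.25, 0.30),   # group 0: sizable attribute interaction
    1: (0.20, 0.50, 0.20, 0.05),   # group 1: mostly a main effect of attribute 1
}
pi = {0: 0.6, 1: 0.4}              # hypothetical mixing proportions

# Marginal success probability under the mixture for each attribute profile
for alpha in product([0, 1], repeat=2):
    p_mix = sum(pi[g] * gdina_prob(alpha, deltas_group[g]) for g in pi)
    print(alpha, round(p_mix, 3))

Allowing the delta parameters to differ by latent group is what lets a mixture CDM capture heterogeneity that a single G-DINA model would average over.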
Peer reviewed
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly due to the inclusion of a random factor variable (latent variable). The random factor variable gives rise to the incidental parameter problem, since the number of parameters increases as data from new persons are included. Therefore, IRT models require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
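To make the incidental parameter point concrete, here is a small Python sketch (not the article's estimation method): a Rasch model whose person parameters are integrated out with Gauss-Hermite quadrature, so the marginal likelihood depends only on the fixed number of item difficulties. The simulated data and quadrature settings are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 500, 10
theta = rng.normal(size=n_persons)           # latent traits (incidental parameters)
b = np.linspace(-1.5, 1.5, n_items)          # item difficulties (structural parameters)
p = 1 / (1 + np.exp(-(theta[:, None] - b)))  # Rasch response probabilities
X = rng.binomial(1, p)                       # simulated item responses

# Gauss-Hermite rule rescaled for a standard normal latent distribution
nodes, weights = np.polynomial.hermite.hermgauss(21)
nodes, weights = nodes * np.sqrt(2), weights / np.sqrt(np.pi)

def marginal_loglik(b_hat):
    """Marginal log-likelihood of the item difficulties: theta is integrated
    out, so the parameter count does not grow with the number of persons."""
    logit = nodes[None, :, None] - b_hat[None, None, :]      # 1 x nodes x items
    pr = 1 / (1 + np.exp(-logit))
    lik_given_theta = np.prod(np.where(X[:, None, :] == 1, pr, 1 - pr), axis=2)
    return np.sum(np.log(lik_given_theta @ weights))

print(marginal_loglik(b))   # evaluated at the generating difficulties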
Peer reviewed
Adrian Quintero; Emmanuel Lesaffre; Geert Verbeke – Journal of Educational and Behavioral Statistics, 2024
Bayesian methods to infer model dimensionality in factor analysis generally assume a lower triangular structure for the factor loadings matrix. Consequently, the ordering of the outcomes influences the results. Therefore, we propose a method to infer model dimensionality without imposing any prior restriction on the loadings matrix. Our approach…
Descriptors: Bayesian Statistics, Factor Analysis, Factor Structure, Sampling
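The order dependence mentioned in the abstract can be demonstrated in a few lines of Python: rotating a loadings matrix so that its top k x k block is lower triangular (the usual identification constraint) yields different constrained loadings once the outcomes are reordered. The loadings and the permutation below are arbitrary illustrations, not taken from the article.

import numpy as np

rng = np.random.default_rng(0)
p, k = 6, 2
Lambda = rng.normal(size=(p, k))             # hypothetical factor loadings

def lower_triangular_form(L):
    """Rotate loadings so the top k x k block is lower triangular."""
    k = L.shape[1]
    Q, _ = np.linalg.qr(L[:k, :].T)          # L[:k,:] = R.T @ Q.T, with R.T lower triangular
    return L @ Q

L1 = lower_triangular_form(Lambda)
perm = np.array([3, 0, 5, 1, 4, 2])          # same outcomes, different ordering
L2 = lower_triangular_form(Lambda[perm])

# Rotations preserve Lambda @ Lambda.T, so both represent the same factor model,
# yet the constrained loading patterns (and anything inferred from them) differ.
print(np.round(L1[:2], 2))
print(np.round(L2[:2], 2))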
Peer reviewed
Daoxuan Fu; Chunying Qin; Zhaosheng Luo; Yujun Li; Xiaofeng Yu; Ziyu Ye – Journal of Educational and Behavioral Statistics, 2025
One of the central components of cognitive diagnostic assessment is the Q-matrix, an essential loading indicator matrix that is typically constructed by subject matter experts. Nonetheless, the construction of the Q-matrix remains a largely subjective process and might lead to misspecifications. Many researchers have recognized the…
Descriptors: Q Methodology, Matrices, Diagnostic Tests, Cognitive Measurement
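For readers unfamiliar with the Q-matrix, the short Python sketch below shows what one looks like and checks a single, commonly cited completeness condition (every attribute measured in isolation by at least one item); it is an illustration only, not the estimation or validation approach studied in the article.

import numpy as np

# Hypothetical Q-matrix: 5 items (rows) by 3 attributes (columns);
# q_jk = 1 means item j requires attribute k.
Q = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [1, 0, 1],
])

def contains_identity(Q):
    """Completeness check: does Q contain an identity submatrix, i.e. is
    every attribute measured in isolation by at least one item?"""
    K = Q.shape[1]
    single = Q[Q.sum(axis=1) == 1]          # items requiring exactly one attribute
    return all((single[:, k] == 1).any() for k in range(K))

print(contains_identity(Q))                 # True for this Q-matrix
print(contains_identity(Q[[3, 4]]))         # False: no single-attribute items left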
Peer reviewed
Yannick Rothacher; Carolin Strobl – Journal of Educational and Behavioral Statistics, 2024
Random forests are a nonparametric machine learning method that is currently gaining popularity in the behavioral sciences. Despite random forests' potential advantages over more conventional statistical methods, an open question is how reliably random forests can identify informative predictor variables. The present study…
Descriptors: Predictor Variables, Selection Criteria, Behavioral Sciences, Reliability
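A minimal scikit-learn sketch of the kind of variable-selection question the study examines: fit a random forest to data in which only two of five predictors are informative and inspect permutation importances. The data-generating model, sample size, and hyperparameters are arbitrary; in practice the importances would preferably be computed on a held-out sample.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 5))                 # five candidate predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
# only the first two predictors carry signal; the remaining three are noise

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(forest, X, y, n_repeats=20, random_state=0)

for j, (m, s) in enumerate(zip(imp.importances_mean, imp.importances_std)):
    print(f"predictor {j}: importance = {m:.3f} +/- {s:.3f}")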
Peer reviewed
Kazuhiro Yamaguchi – Journal of Educational and Behavioral Statistics, 2025
This study proposes a Bayesian method for diagnostic classification models (DCMs) in a partially known Q-matrix setting that lies between exploratory and confirmatory DCMs. This setting is practical and useful because test experts have prior knowledge of the Q-matrix but cannot readily specify it completely. The proposed method employs priors for…
Descriptors: Models, Classification, Bayesian Statistics, Evaluation Methods
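A small Python sketch of what a partially known Q-matrix and entry-wise priors might look like; the coding scheme (-1 for unspecified entries) and the Bernoulli(0.5) prior are assumptions made for illustration, not the article's specification.

import numpy as np

# Hypothetical partially known Q-matrix: 4 items x 3 attributes.
# 1/0 = entry specified by the experts; -1 = entry they could not specify.
Q_expert = np.array([
    [ 1,  0,  0],
    [ 0,  1, -1],
    [-1, -1,  1],
    [ 1,  0, -1],
])

# Prior inclusion probability per entry: known entries get a point mass
# (0 or 1), unknown entries get a Bernoulli(0.5) prior. A full Bayesian DCM
# would combine these priors with item parameters and update them from data.
prior_prob = np.where(Q_expert == -1, 0.5, Q_expert.astype(float))
print(prior_prob)

# One complete Q-matrix drawn from the prior (e.g., to initialize MCMC)
rng = np.random.default_rng(0)
print(rng.binomial(1, prior_prob))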
Peer reviewed
Joakim Wallmark; James O. Ramsay; Juan Li; Marie Wiberg – Journal of Educational and Behavioral Statistics, 2024
Item response theory (IRT) models the relationship between the possible scores on a test item and a test taker's attainment of the latent trait that the item is intended to measure. In this study, we compare two models for tests with polytomously scored items: the optimal scoring (OS) model, a nonparametric IRT model based on the principles of…
Descriptors: Item Response Theory, Test Items, Models, Scoring
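As a point of reference for the parametric side of the comparison, the Python sketch below computes category probabilities for one polytomous item under the graded response model; the item parameters are hypothetical, and the nonparametric optimal scoring model itself is not reproduced here.

import numpy as np

def grm_category_probs(theta, a, b):
    """Graded response model for one item: P(X >= k) = logistic(a * (theta - b_k)),
    and category probabilities are differences of adjacent cumulative curves."""
    theta = np.atleast_1d(theta)
    cum = 1 / (1 + np.exp(-a * (theta[:, None] - np.asarray(b)[None, :])))
    cum = np.hstack([np.ones((len(theta), 1)), cum, np.zeros((len(theta), 1))])
    return cum[:, :-1] - cum[:, 1:]

# Hypothetical item with four ordered categories (three thresholds)
probs = grm_category_probs(theta=np.array([-1.0, 0.0, 1.0]),
                           a=1.2, b=[-0.8, 0.2, 1.1])
print(np.round(probs, 3))          # each row sums to 1 across the four categories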
Peer reviewed
Mingya Huang; David Kaplan – Journal of Educational and Behavioral Statistics, 2025
The issue of model uncertainty has been gaining interest in the education and social science communities over the years, and the dominant methods for handling it are based on Bayesian inference, particularly Bayesian model averaging. However, Bayesian model averaging assumes that the true data-generating model is within the…
Descriptors: Bayesian Statistics, Hierarchical Linear Modeling, Statistical Inference, Predictor Variables
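A compact Python sketch of Bayesian model averaging in its simplest form, using BIC approximations to the marginal likelihoods of all subsets of three regression predictors; the data and candidate set are hypothetical. The weights are renormalized over the candidate set, which reflects the assumption the abstract refers to: the averaging is only coherent if the true data-generating model is among the candidates.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
y = 1.0 + 0.8 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(size=n)

def bic(y, X_design):
    """BIC of a Gaussian linear regression fitted by least squares."""
    beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
    resid = y - X_design @ beta
    sigma2 = resid @ resid / len(y)
    k = X_design.shape[1] + 1                    # coefficients + error variance
    return len(y) * np.log(sigma2) + k * np.log(len(y))

# Candidate models: every subset of the three predictors, plus an intercept
models, bics = [], []
for r in range(4):
    for subset in combinations(range(3), r):
        X_m = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        models.append(subset)
        bics.append(bic(y, X_m))

# BMA weights: exp(-BIC/2) approximates each model's marginal likelihood
bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()
for m, wt in zip(models, w):
    print(m, round(wt, 3))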
Peer reviewed
Jordan M. Wheeler; Allan S. Cohen; Shiyu Wang – Journal of Educational and Behavioral Statistics, 2024
Topic models are mathematical and statistical models used to analyze textual data. The objective of topic models is to gain information about the latent semantic space of a set of related textual data. The semantic space of a set of textual data captures the relationships between documents and words and how they are used. Topic models are becoming…
Descriptors: Semantics, Educational Assessment, Evaluators, Reliability
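A brief, self-contained example of a topic model fitted with scikit-learn's latent Dirichlet allocation on a toy corpus; the documents are invented and the settings (two topics, default priors) are illustrative rather than those used in the article.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical mini-corpus of short constructed-response texts
docs = [
    "the fraction is divided by the denominator",
    "add the numerator and keep the denominator",
    "the experiment measures force and acceleration",
    "mass times acceleration gives the net force",
]

counts = CountVectorizer(stop_words="english").fit(docs)
dtm = counts.transform(docs)                         # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = counts.get_feature_names_out()
for t, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:4]
    print(f"topic {t}:", [str(terms[i]) for i in top])
print(lda.transform(dtm).round(2))                   # document-topic proportions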
Peer reviewed
Na Shan; Ping-Feng Xu – Journal of Educational and Behavioral Statistics, 2025
The detection of differential item functioning (DIF) is important in psychological and behavioral sciences. Standard DIF detection methods perform an item-by-item test iteratively, often assuming that all items except the one under investigation are DIF-free. This article proposes a Bayesian adaptive Lasso method to detect DIF in graded response…
Descriptors: Bayesian Statistics, Item Response Theory, Adolescents, Longitudinal Studies
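The core shrinkage idea can be illustrated without the full Bayesian machinery: below is a Python sketch of (adaptive) Lasso soft thresholding applied to hypothetical DIF estimates, standing in for the Laplace-prior shrinkage placed on DIF parameters so that no DIF-free anchor items need to be assumed in advance. The estimates and penalty values are invented.

import numpy as np

def soft_threshold(z, lam):
    """Lasso-type shrinkage: pull estimates toward zero and set small ones to 0."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Hypothetical unpenalized DIF estimates (group differences in item parameters)
dif_hat = np.array([0.02, -0.05, 0.60, 0.01, -0.45, 0.03])

# Adaptive Lasso: each item's penalty is inversely related to the size of its
# initial estimate, so small apparent DIF is removed entirely while large DIF
# effects are shrunk comparatively little.
lam_lasso = 0.08                                        # common penalty, ordinary Lasso
lam_adapt = 0.02 / np.maximum(np.abs(dif_hat), 1e-3)    # item-specific adaptive penalties
print(soft_threshold(dif_hat, lam_lasso))               # ordinary Lasso shrinkage
print(soft_threshold(dif_hat, lam_adapt))               # adaptive Lasso shrinkage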
Peer reviewed
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
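A toy Python illustration of the chance-constrained idea (not the article's model): given uncertain item information estimates, choose the fixed-length test that maximizes a lower alpha-quantile of total information rather than its point estimate. The pool size, standard errors, and independence/normality assumptions are all invented for the example.

import numpy as np
from itertools import combinations
from scipy.stats import norm

rng = np.random.default_rng(0)
pool, test_len, alpha = 12, 5, 0.05

# Estimated item information at a target ability, with standard errors carrying
# the uncertainty of the underlying IRT parameter estimates
info_hat = rng.uniform(0.3, 1.2, size=pool)
info_se = rng.uniform(0.05, 0.30, size=pool)

z = norm.ppf(alpha)                                # about -1.645 for alpha = .05

def lower_bound(items):
    """alpha-quantile of total information for a candidate test, assuming
    independent, normally distributed estimation errors."""
    idx = list(items)
    return info_hat[idx].sum() + z * np.sqrt((info_se[idx] ** 2).sum())

# Chance-constrained objective: optimize the alpha-quantile of the random
# objective; classical ATA would optimize the point estimates instead.
best = max(combinations(range(pool), test_len), key=lower_bound)
naive = tuple(sorted(int(i) for i in np.argsort(info_hat)[-test_len:]))
print("chance-constrained selection:", best, round(lower_bound(best), 3))
print("classical selection:         ", naive, round(lower_bound(naive), 3))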