Showing all 8 results
Peer reviewed
Joemari Olea; Kevin Carl Santos – Journal of Educational and Behavioral Statistics, 2024
Although the generalized deterministic inputs, noisy "and" gate model (G-DINA; de la Torre, 2011) is a general cognitive diagnosis model (CDM), it does not account for heterogeneity rooted in latent groups within the examinee population. To address this, the study proposes the mixture G-DINA model, a CDM that…
Descriptors: Cognitive Measurement, Models, Algorithms, Simulation
Peer reviewed
Kazuhiro Yamaguchi – Journal of Educational and Behavioral Statistics, 2025
This study proposes a Bayesian method for diagnostic classification models (DCMs) in a partially known Q-matrix setting, intermediate between exploratory and confirmatory DCMs. This Q-matrix setting is practical and useful because test experts have prior knowledge of the Q-matrix but cannot readily specify it completely. The proposed method employs priors for…
Descriptors: Models, Classification, Bayesian Statistics, Evaluation Methods
Peer reviewed
Grund, Simon; Lüdtke, Oliver; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2023
Multiple imputation (MI) is a popular method for handling missing data. In education research, it can be challenging to use MI because the data often have a clustered structure that needs to be accommodated during MI. Although much research has considered applications of MI in hierarchical data, little is known about its use in cross-classified…
Descriptors: Educational Research, Data Analysis, Error of Measurement, Computation
Peer reviewed
Harel, Daphna; Steele, Russell J. – Journal of Educational and Behavioral Statistics, 2018
Collapsing categories is a commonly used data reduction technique; however, no principled methods yet exist to determine whether collapsing categories is appropriate in practice. With ordinal responses under the partial credit model, when categories are collapsed, the true model for the collapsed data is no longer a partial credit…
Descriptors: Matrices, Models, Item Response Theory, Research Methodology
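As a minimal illustration of what collapsing categories means in practice, the sketch below merges two categories of a polytomous item via an explicit mapping. The item, scores, and mapping are hypothetical examples, not taken from the article, which concerns when such collapsing is statistically justified.

```python
def collapse_categories(responses, mapping):
    """Collapse ordinal response categories according to an explicit mapping."""
    return [mapping[r] for r in responses]

# Hypothetical example: a partial-credit item scored 0-3 whose middle
# categories 1 and 2 are merged, yielding a 3-category item scored 0-2.
mapping = {0: 0, 1: 1, 2: 1, 3: 2}
collapse_categories([0, 3, 2, 1, 3], mapping)  # → [0, 2, 1, 1, 2]
```

The article's point is that the collapsed responses no longer follow a partial credit model, so a mapping like this changes the true measurement model, not just the scoring.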
Peer reviewed
Chan, Wendy – Journal of Educational and Behavioral Statistics, 2018
Policymakers have grown increasingly interested in how experimental results may generalize to a larger population. However, recently developed propensity score-based methods are limited by small sample sizes when the experimental study is generalized to a population at least 20 times larger. This is particularly problematic for methods…
Descriptors: Computation, Generalization, Probability, Sample Size
Peer reviewed
Nydick, Steven W. – Journal of Educational and Behavioral Statistics, 2014
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
Descriptors: Probability, Item Response Theory, Models, Classification
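The abstract's decision rule can be sketched concretely: a textbook SPRT for a 2PL IRT classification test, which accumulates a log-likelihood ratio comparing two points on either side of the classification bound and stops once Wald's critical values are crossed. The item parameters, cutpoint offset, and error rates below are hypothetical, and this is a generic SPRT, not Nydick's specific procedure.

```python
import math

def irt_prob(theta, a, b):
    """2PL item response probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def sprt_decision(responses, items, cut=0.0, delta=0.5, alpha=0.05, beta=0.05):
    """SPRT classification at a cutpoint `cut` with indifference region +/- delta.

    responses: list of 0/1 item scores; items: list of (a, b) 2PL parameters.
    Returns a classification decision or a signal to keep testing.
    """
    theta_hi, theta_lo = cut + delta, cut - delta
    llr = 0.0
    for x, (a, b) in zip(responses, items):
        p_hi = irt_prob(theta_hi, a, b)
        p_lo = irt_prob(theta_lo, a, b)
        # Log-likelihood ratio of the response under the two hypotheses.
        llr += math.log(p_hi if x else 1 - p_hi) - math.log(p_lo if x else 1 - p_lo)
    upper = math.log((1 - beta) / alpha)   # accept theta >= cut
    lower = math.log(beta / (1 - alpha))   # accept theta < cut
    if llr >= upper:
        return "above cut"
    if llr <= lower:
        return "below cut"
    return "continue testing"
```

For example, a run of correct responses on moderately discriminating items pushes the ratio past the upper bound and terminates the test with an "above cut" classification.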
Peer reviewed
Longford, Nicholas T. – Journal of Educational and Behavioral Statistics, 2014
A method for medical screening is adapted to differential item functioning (DIF). Its essential elements are explicit declarations of the level of DIF that is acceptable and of the loss function that quantifies the consequences of the two kinds of inappropriate classification of an item. Instead of a single level and a single function, sets of…
Descriptors: Test Items, Test Bias, Simulation, Hypothesis Testing
Peer reviewed
Bolt, Daniel M.; Cohen, Allan S.; Wollack, James A. – Journal of Educational and Behavioral Statistics, 2001
Proposes a mixture item response model for investigating individual differences in the selection of response categories in multiple choice items. A real data example illustrates how the model can be used to distinguish examinees disproportionately attracted to different types of distractors, and a simulation study evaluates item parameter recovery…
Descriptors: Classification, Individual Differences, Item Response Theory, Mathematical Models