Showing 1 to 15 of 46 results
Peer reviewed
Joemari Olea; Kevin Carl Santos – Journal of Educational and Behavioral Statistics, 2024
Although the generalized deterministic inputs, noisy "and" gate model (G-DINA; de la Torre, 2011) is a general cognitive diagnosis model (CDM), it does not account for the heterogeneity rooted in latent groups that exist in the population of examinees. To address this, the present study proposes the mixture G-DINA model, a CDM that…
Descriptors: Cognitive Measurement, Models, Algorithms, Simulation
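To make the modeling idea above concrete, here is a minimal sketch, assuming a single item that measures two attributes: the identity-link G-DINA item response function, and a hypothetical finite-mixture extension in which each latent group has its own item parameters. The parameter values and group weights are invented for illustration only.

    def gdina_prob(alpha, delta):
        # P(X = 1 | alpha) under the identity-link G-DINA for a two-attribute item.
        # alpha = (a1, a2) is the binary attribute pattern;
        # delta = (d0, d1, d2, d12) are intercept, main effects, and interaction.
        a1, a2 = alpha
        d0, d1, d2, d12 = delta
        return d0 + d1 * a1 + d2 * a2 + d12 * a1 * a2

    def mixture_gdina_prob(alpha, deltas, weights):
        # Hypothetical mixture: group-specific delta vectors mixed over group weights.
        return sum(w * gdina_prob(alpha, d) for d, w in zip(deltas, weights))

    deltas = [(0.10, 0.30, 0.30, 0.25), (0.25, 0.20, 0.20, 0.30)]
    print(mixture_gdina_prob((1, 0), deltas, weights=(0.6, 0.4)))  # 0.42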
Peer reviewed
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly because they include a random factor variable (a latent variable). This random factor variable gives rise to the incidental parameter problem, since the number of parameters grows as data from new persons are included. Therefore, IRT models require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
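As a reminder of what such a "specific estimation method" typically looks like, the following sketch (assumptions: a 2PL model, a standard-normal latent trait, and invented item parameters) shows marginal maximum likelihood, in which the person's latent variable is integrated out with Gauss-Hermite quadrature rather than estimated as an extra parameter for every new person.

    import numpy as np

    def p2pl(theta, a, b):
        # 2PL item response function.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def marginal_loglik(responses, a, b, n_nodes=21):
        # Log-likelihood of one person's 0/1 response vector with theta ~ N(0, 1)
        # integrated out by Gauss-Hermite quadrature.
        nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
        theta = nodes * np.sqrt(2.0)          # rescale nodes to the standard normal
        w = weights / np.sqrt(np.pi)
        probs = p2pl(theta[:, None], a, b)    # n_nodes x n_items
        lik_given_theta = np.prod(np.where(responses, probs, 1 - probs), axis=1)
        return np.log(np.sum(w * lik_given_theta))

    a = np.array([1.2, 0.8, 1.5])             # hypothetical discriminations
    b = np.array([-0.5, 0.0, 0.7])            # hypothetical difficulties
    print(marginal_loglik(np.array([1, 0, 1]), a, b))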
Peer reviewed
Daoxuan Fu; Chunying Qin; Zhaosheng Luo; Yujun Li; Xiaofeng Yu; Ziyu Ye – Journal of Educational and Behavioral Statistics, 2025
One of the central components of cognitive diagnostic assessment is the Q-matrix, an essential loading indicator matrix that is typically constructed by subject matter experts. Nonetheless, the construction of the Q-matrix remains a largely subjective process and can lead to misspecifications. Many researchers have recognized the…
Descriptors: Q Methodology, Matrices, Diagnostic Tests, Cognitive Measurement
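For readers unfamiliar with the term, a Q-matrix is simply a binary items-by-attributes loading matrix; the toy example below (hypothetical items and attributes, not from the study) shows what a misspecification looks like: a single flipped entry.

    import numpy as np

    # Rows are items, columns are attributes; q_jk = 1 means item j requires attribute k.
    attributes = ["add_fractions", "common_denominator", "simplify"]
    Q = np.array([
        [1, 1, 0],   # item 1 requires attributes 1 and 2
        [1, 0, 1],   # item 2 requires attributes 1 and 3
        [0, 0, 1],   # item 3 requires attribute 3 only
    ])

    Q_misspecified = Q.copy()
    Q_misspecified[2, 0] = 1   # an expert mistakenly loads item 3 on attribute 1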
Peer reviewed
Kazuhiro Yamaguchi – Journal of Educational and Behavioral Statistics, 2025
This study proposes a Bayesian method for diagnostic classification models (DCMs) in a partially known Q-matrix setting that lies between exploratory and confirmatory DCMs. This setting is practical and useful because test experts have prior knowledge of the Q-matrix but cannot readily specify it completely. The proposed method employs priors for…
Descriptors: Models, Classification, Bayesian Statistics, Evaluation Methods
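One way to picture a "partially known" Q-matrix, sketched under stated assumptions rather than as the paper's actual prior specification: entries the experts are sure about are fixed, while uncertain entries receive Bernoulli priors, so the model sits between the confirmatory (all entries fixed) and exploratory (all entries free) extremes.

    import numpy as np

    rng = np.random.default_rng(0)

    # None marks an entry the experts cannot specify; 0/1 entries are treated as known.
    Q_known = np.array([
        [1,    1,    None],
        [None, 0,    1   ],
        [0,    None, None],
    ], dtype=object)

    def draw_q(Q_known, prior_inclusion=0.5):
        # One prior draw of the full Q-matrix: keep known entries, draw unknown ones.
        draw = np.zeros(Q_known.shape, dtype=int)
        for idx, val in np.ndenumerate(Q_known):
            draw[idx] = int(rng.random() < prior_inclusion) if val is None else int(val)
        return draw

    print(draw_q(Q_known))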
Peer reviewed
Joakim Wallmark; James O. Ramsay; Juan Li; Marie Wiberg – Journal of Educational and Behavioral Statistics, 2024
Item response theory (IRT) models the relationship between the possible scores on a test item and a test taker's attainment of the latent trait that the item is intended to measure. In this study, we compare two models for tests with polytomously scored items: the optimal scoring (OS) model, a nonparametric IRT model based on the principles of…
Descriptors: Item Response Theory, Test Items, Models, Scoring
Peer reviewed
Grund, Simon; Lüdtke, Oliver; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2023
Multiple imputation (MI) is a popular method for handling missing data. In education research, using MI can be challenging because the data often have a clustered structure that needs to be accommodated during MI. Although much research has considered applications of MI to hierarchical data, little is known about its use in cross-classified…
Descriptors: Educational Research, Data Analysis, Error of Measurement, Computation
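A minimal sketch of the general problem, not the authors' procedure: with a single clustering factor (cross-classified data involve two non-nested factors), one simple strategy from the multilevel-imputation literature is to add cluster indicator dummies to the imputation model so that imputations respect between-cluster differences. Variable names and data below are hypothetical.

    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "school": rng.integers(0, 10, 200),              # clustering factor
        "pretest": rng.normal(50, 10, 200),
        "posttest": rng.normal(55, 10, 200),
    })
    df.loc[rng.random(200) < 0.2, "posttest"] = np.nan   # inject missingness

    # Cluster dummies let the imputation model shift predictions by school.
    X = pd.concat([df[["pretest", "posttest"]],
                   pd.get_dummies(df["school"], prefix="school")], axis=1)
    imputed = IterativeImputer(random_state=0).fit_transform(X)
    df["posttest_imputed"] = imputed[:, X.columns.get_loc("posttest")]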
Peer reviewed
Liu, Jin – Journal of Educational and Behavioral Statistics, 2022
Longitudinal data analysis has been widely employed to examine between-individual differences in within-individual changes. One challenge of such analyses is that the rate of change is available only indirectly when change patterns are nonlinear with respect to time. Latent change score models (LCSMs), which can be employed to investigate the…
Descriptors: Longitudinal Studies, Individual Differences, Scores, Models
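The core bookkeeping of a latent change score model can be shown with a small simulation; this is an illustrative sketch with invented parameters, not the article's model. The true score at each occasion equals the previous true score plus a latent change, so the rate of change is represented directly even when growth is nonlinear in time.

    import numpy as np

    rng = np.random.default_rng(1)
    n, T = 500, 5
    eta = np.zeros((n, T))
    eta[:, 0] = rng.normal(50, 10, n)        # latent initial status
    beta = 0.2                               # proportional-change parameter (hypothetical)

    for t in range(1, T):
        delta = beta * eta[:, t - 1] + rng.normal(0, 1, n)   # latent change score
        eta[:, t] = eta[:, t - 1] + delta                    # accumulate the change

    y = eta + rng.normal(0, 2, (n, T))       # observed scores = true scores + noise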
Peer reviewed
Na Shan; Ping-Feng Xu – Journal of Educational and Behavioral Statistics, 2025
The detection of differential item functioning (DIF) is important in psychological and behavioral sciences. Standard DIF detection methods perform an item-by-item test iteratively, often assuming that all items except the one under investigation are DIF-free. This article proposes a Bayesian adaptive Lasso method to detect DIF in graded response…
Descriptors: Bayesian Statistics, Item Response Theory, Adolescents, Longitudinal Studies
Peer reviewed
Hung, Su-Pin; Huang, Hung-Yu – Journal of Educational and Behavioral Statistics, 2022
To address response styles or bias in rating scales, forced-choice items are often used, asking respondents to rank their attitudes or preferences among a limited set of options. The rating scales used by raters to render judgments on ratees' performance also contribute to rater bias or errors; consequently, forced-choice items have recently…
Descriptors: Evaluation Methods, Rating Scales, Item Analysis, Preferences
Sweet, Tracy M. – Journal of Educational and Behavioral Statistics, 2019
Some educational interventions aim to change the ways in which individuals interact, and social networks are particularly useful for quantifying these changes. For many of these interventions, the ultimate goal is to change some outcome of interest, such as teacher quality or student achievement, and social networks act as a natural…
Descriptors: Interaction, Intervention, Mediation Theory, Social Networks
Peer reviewed
Harel, Daphna; Steele, Russell J. – Journal of Educational and Behavioral Statistics, 2018
Collapsing categories is a commonly used data reduction technique; however, to date, no principled methods exist for determining whether collapsing categories is appropriate in practice. With ordinal responses under the partial credit model, when categories are collapsed, the true model for the collapsed data is no longer a partial credit…
Descriptors: Matrices, Models, Item Response Theory, Research Methodology
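A toy illustration of the operation in question (a hypothetical five-category item, not data from the article): adjacent score categories of a partial credit item are merged, which is exactly the step whose consequences the study examines.

    import numpy as np

    responses = np.array([0, 1, 1, 2, 3, 4, 2, 0])        # original 0-4 scores
    collapse_map = {0: 0, 1: 0, 2: 1, 3: 2, 4: 2}         # merge 0-1 and 3-4
    collapsed = np.vectorize(collapse_map.get)(responses)
    print(collapsed)                                      # [0 0 0 1 2 2 1 0]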
Peer reviewed
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures, such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
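For reference, the two quantities contrasted in the abstract are, in standard notation (general definitions, not anything specific to this article):

    I(\eta) = \mathbb{E}_X\!\left[-\frac{\partial^2 \ell(\eta; X)}{\partial \eta\,\partial \eta^\top}\right]   (expected, or Fisher, information: averaged over the response distribution)

    J(\hat\eta) = -\left.\frac{\partial^2 \ell(\eta; x_{\mathrm{obs}})}{\partial \eta\,\partial \eta^\top}\right|_{\eta = \hat\eta}   (observed information: evaluated at the realized data)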
Peer reviewed
Magnus, Brooke E.; Thissen, David – Journal of Educational and Behavioral Statistics, 2017
Questionnaires that include items eliciting count responses are becoming increasingly common in psychology. This study proposes methodological techniques to overcome some of the challenges associated with analyzing multivariate item response data that exhibit zero inflation, maximum inflation, and heaping at preferred digits. The modeling…
Descriptors: Item Response Theory, Models, Multivariate Analysis, Questionnaires
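A minimal sketch of the kind of response distribution involved (hypothetical parameters, and only the zero-inflation piece; maximum inflation and digit heaping would add further mixture points): a zero-inflated Poisson places extra probability mass at zero on top of a Poisson count model.

    import numpy as np
    from scipy.stats import poisson

    def zip_pmf(k, pi_zero, lam):
        # P(Y = k): extra mass pi_zero at zero, otherwise Poisson(lam).
        return pi_zero * (k == 0) + (1 - pi_zero) * poisson.pmf(k, lam)

    print(zip_pmf(np.arange(5), pi_zero=0.3, lam=2.0))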
Peer reviewed
Romero, Mauricio; Riascos, Álvaro; Jara, Diego – Journal of Educational and Behavioral Statistics, 2015
Multiple-choice exams are frequently used as an efficient and objective method to assess learning, but they are more vulnerable to answer copying than tests based on open questions. Several statistical tests (known as indices in the literature) have been proposed to detect cheating; however, to the best of our knowledge, they all lack mathematical…
Descriptors: Cheating, Multiple Choice Tests, Statistical Analysis, Models
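A toy illustration of the basic logic behind many copying indices, not one of the indices the article evaluates: count the identical wrong answers shared by a suspected source and copier, then compare the count with a binomial benchmark that assumes independent responding. The answer strings and the matching probability below are invented.

    import numpy as np
    from scipy.stats import binom

    key    = np.array(list("ABCDABCDAB"))
    source = np.array(list("ABDDACCDBB"))
    copier = np.array(list("ABDDACCDBB"))

    shared_wrong = np.sum((source != key) & (copier != key) & (source == copier))
    n_wrong = np.sum(copier != key)
    p_match = 1 / 3    # crude chance of matching on a wrong answer with 4 options
    p_value = binom.sf(shared_wrong - 1, n_wrong, p_match)   # P(count >= observed)
    print(shared_wrong, p_value)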
Peer reviewed
Yang, Ji Seung; Zheng, Xiaying – Journal of Educational and Behavioral Statistics, 2018
The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (IRT) package available as of Stata version 14 (2015). Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment (PISA), we review the IRT package from…
Descriptors: Item Response Theory, Item Analysis, Computer Software, Statistical Analysis