Showing 1 to 15 of 81 results
Peer reviewed
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly because they include a random factor variable (the latent variable). This random factor variable gives rise to the incidental parameter problem, since the number of parameters grows as data from new persons are included. IRT models therefore require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
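The incidental parameter issue noted in this abstract is usually handled by integrating the latent trait out of the likelihood rather than estimating a parameter per person. A minimal sketch of that idea, not taken from the article and using hypothetical item parameters, is:

```python
# Minimal sketch (not from the article): marginal likelihood of one person's
# responses under a 2PL model, integrating the latent trait out with
# Gauss-Hermite quadrature so no person-specific parameter has to be estimated.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite nodes/weights

def p_correct(theta, a, b):
    """2PL response probability P(X = 1 | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def marginal_likelihood(x, a, b, n_nodes=21):
    """Integrate the conditional likelihood over a standard-normal prior on theta."""
    nodes, weights = hermegauss(n_nodes)       # integrates against exp(-t^2 / 2)
    weights = weights / np.sqrt(2 * np.pi)     # normalize to the N(0, 1) density
    p = p_correct(nodes[:, None], a, b)        # n_nodes x n_items
    cond_lik = np.prod(p**x * (1 - p)**(1 - x), axis=1)
    return np.sum(weights * cond_lik)

# Hypothetical item parameters and one response pattern, for illustration only.
a = np.array([1.2, 0.8, 1.5])
b = np.array([-0.5, 0.0, 0.7])
x = np.array([1, 1, 0])
print(marginal_likelihood(x, a, b))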
Peer reviewed
Yamaguchi, Kazuhiro; Zhang, Jihong – Journal of Educational Measurement, 2023
This study proposed Gibbs sampling algorithms for variable selection in a latent regression model under a unidimensional two-parameter logistic item response theory model. Three types of shrinkage priors were employed to obtain shrinkage estimates: double-exponential (i.e., Laplace), horseshoe, and horseshoe+ priors. These shrinkage priors were…
Descriptors: Algorithms, Simulation, Mathematics Achievement, Bayesian Statistics
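The three shrinkage priors named in this abstract differ mainly in how strongly they pull small coefficients toward zero while leaving large ones alone. A small sketch of that contrast, using standard sampling identities rather than the authors' Gibbs algorithms, is:

```python
# Minimal sketch (assumptions, not the article's code): draw samples from the three
# shrinkage priors the study compares, to see how sharply each concentrates
# coefficients near zero while keeping heavy tails for genuine signals.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Double-exponential (Laplace) prior.
laplace = rng.laplace(loc=0.0, scale=1.0, size=n)

# Horseshoe prior: beta | lambda ~ N(0, lambda^2), lambda ~ half-Cauchy(0, 1).
lam = np.abs(rng.standard_cauchy(n))
horseshoe = rng.normal(0.0, lam)

# Horseshoe+ prior: an extra half-Cauchy layer on the local scale.
eta = np.abs(rng.standard_cauchy(n))
lam_plus = np.abs(eta * rng.standard_cauchy(n))
horseshoe_plus = rng.normal(0.0, lam_plus)

for name, draws in [("laplace", laplace), ("horseshoe", horseshoe),
                    ("horseshoe+", horseshoe_plus)]:
    print(f"{name:11s} P(|beta| < 0.1) = {np.mean(np.abs(draws) < 0.1):.3f}, "
          f"P(|beta| > 5) = {np.mean(np.abs(draws) > 5):.3f}")
```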
Peer reviewed
Full text available on ERIC (PDF)
Eray Selçuk; Ergül Demir – International Journal of Assessment Tools in Education, 2024
This research aims to compare item response theory ability and item parameter estimates obtained under maximum likelihood and Bayesian approaches across different Monte Carlo simulation conditions. For this purpose, varying the prior distribution type, sample size, test length, and logistic model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
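A comparison of this kind starts from simulated response data for each design cell. A minimal data-generation sketch with hypothetical condition values (not the article's actual design) is:

```python
# Minimal sketch (hypothetical condition values, not the article's design): generate
# dichotomous 2PL response data for one Monte Carlo cell so that ML and Bayesian
# estimators can later be compared on the same simulated data set.
import numpy as np

def simulate_2pl(n_persons, n_items, rng):
    theta = rng.normal(0.0, 1.0, n_persons)                # abilities
    a = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)   # discriminations
    b = rng.normal(0.0, 1.0, n_items)                      # difficulties
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (rng.uniform(size=p.shape) < p).astype(int), theta, a, b

rng = np.random.default_rng(2024)
for n_persons in (250, 1000):          # hypothetical sample sizes
    for n_items in (10, 40):           # hypothetical test lengths
        x, theta, a, b = simulate_2pl(n_persons, n_items, rng)
        print(n_persons, n_items, round(x.mean(), 3))
```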
Peer reviewed
Kara, Yusuf; Kamata, Akihito – Journal of Experimental Education, 2022
Within-cluster variance homogeneity is one of the key assumptions of multilevel models; however, assuming a constant (i.e., equal) within-cluster variance may not be realistic. Moreover, existing within-cluster variance heterogeneity should be regarded as a source of additional information rather than a violation of a model assumption. This study…
Descriptors: Bayesian Statistics, Hierarchical Linear Modeling, Item Response Theory, Multivariate Analysis
Peer reviewed
Kim, Stella Yun; Lee, Won-Chan – Applied Measurement in Education, 2023
This study evaluates various scoring methods including number-correct scoring, IRT theta scoring, and hybrid scoring in terms of scale-score stability over time. A simulation study was conducted to examine the relative performance of five scoring methods in terms of preserving the first two moments of scale scores for a population in a chain of…
Descriptors: Scoring, Comparative Analysis, Item Response Theory, Simulation
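Two of the scoring methods compared here, number-correct scoring and IRT theta scoring, can be contrasted on a single response pattern. A minimal sketch with illustrative item parameters (not the study's), using an EAP theta estimate, is:

```python
# Minimal sketch (illustrative parameters, not the study's): number-correct scoring
# versus an EAP theta score under a 2PL model for one response pattern.
import numpy as np

def eap_theta(x, a, b, grid=np.linspace(-4, 4, 81)):
    """Expected a posteriori theta under a N(0, 1) prior, by grid quadrature."""
    p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))
    lik = np.prod(p**x * (1 - p)**(1 - x), axis=1)
    post = lik * np.exp(-grid**2 / 2)
    post /= post.sum()
    return np.sum(grid * post)

a = np.array([1.0, 1.4, 0.7, 1.1])   # hypothetical discriminations
b = np.array([-1.0, 0.0, 0.5, 1.2])  # hypothetical difficulties
x = np.array([1, 1, 0, 0])           # one response pattern

print("number-correct:", x.sum())
print("EAP theta:", round(eap_theta(x, a, b), 3))
```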
Peer reviewed
Na Shan; Ping-Feng Xu – Journal of Educational and Behavioral Statistics, 2025
The detection of differential item functioning (DIF) is important in psychological and behavioral sciences. Standard DIF detection methods perform an item-by-item test iteratively, often assuming that all items except the one under investigation are DIF-free. This article proposes a Bayesian adaptive Lasso method to detect DIF in graded response…
Descriptors: Bayesian Statistics, Item Response Theory, Adolescents, Longitudinal Studies
Peer reviewed
Fujimoto, Ken A.; Neugebauer, Sabina R. – Educational and Psychological Measurement, 2020
Although item response theory (IRT) models such as the bifactor, two-tier, and between-item-dimensionality IRT models have been devised to confirm complex dimensional structures in educational and psychological data, they can be challenging to use in practice. The reason is that these models are multidimensional IRT (MIRT) models and thus are…
Descriptors: Bayesian Statistics, Item Response Theory, Sample Size, Factor Structure
Peer reviewed
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
Forced-choice (FC) item formats used for noncognitive tests typically present a set of response options that measure different traits and instruct respondents to indicate their preferences among these options, in order to control the response biases commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
Peer reviewed
Zhang, Xue; Tao, Jian; Wang, Chun; Shi, Ning-Zhong – Journal of Educational Measurement, 2019
Model selection is important in any statistical analysis, and the primary goal is to find the preferred (or most parsimonious) model, based on certain criteria, from a set of candidate models given data. Several recent publications have employed the deviance information criterion (DIC) to do model selection among different forms of multilevel item…
Descriptors: Bayesian Statistics, Item Response Theory, Measurement, Models
Zhang, Xue; Tao, Jian; Wang, Chun; Shi, Ning-Zhong – Grantee Submission, 2019
Model selection is important in any statistical analysis, and the primary goal is to find the preferred (or most parsimonious) model, based on certain criteria, from a set of candidate models given data. Several recent publications have employed the deviance information criterion (DIC) to do model selection among different forms of multilevel item…
Descriptors: Bayesian Statistics, Item Response Theory, Measurement, Models
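The deviance information criterion discussed in these two records is computed directly from posterior draws. A generic sketch of the formula (not tied to the multilevel IRT models in the article) is:

```python
# Minimal sketch (generic formula, not tied to the article's multilevel IRT models):
# the deviance information criterion computed from posterior draws, given the
# log-likelihood of the data at each draw and at the posterior mean.
import numpy as np

def dic(loglik_draws, loglik_at_posterior_mean):
    """DIC = Dbar + pD, with pD = Dbar - D(posterior mean)."""
    deviance_draws = -2.0 * np.asarray(loglik_draws)
    d_bar = deviance_draws.mean()                  # posterior mean deviance
    d_hat = -2.0 * loglik_at_posterior_mean        # deviance at the posterior mean
    p_d = d_bar - d_hat                            # effective number of parameters
    return d_bar + p_d

# Hypothetical values for illustration only.
loglik_draws = np.array([-512.3, -510.8, -514.1, -511.5])
print(dic(loglik_draws, loglik_at_posterior_mean=-509.9))
```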
Peer reviewed
Fujimoto, Ken A. – Educational and Psychological Measurement, 2019
Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that accounts for method effects stemming from two method sources, where one source functions differently across the…
Descriptors: Bayesian Statistics, Item Response Theory, Psychometrics, Models
Peer reviewed
Fujimoto, Ken A. – Journal of Educational Measurement, 2020
Multilevel bifactor item response theory (IRT) models are commonly used to account for features of the data that are related to the sampling and measurement processes used to gather those data. These models conventionally make assumptions about the portions of the data structure that represent these features. Unfortunately, when data violate these…
Descriptors: Bayesian Statistics, Item Response Theory, Achievement Tests, Secondary School Students
Peer reviewed
Marcoulides, Katerina M. – Measurement: Interdisciplinary Research and Perspectives, 2018
This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of Bayesian analysis methods was examined. Overall results showed that…
Descriptors: Bayesian Statistics, Item Response Theory, Probability, Difficulty Level
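The informative versus non-informative prior contrast in this abstract can be made concrete for a single parameter. A grid-approximation sketch with hypothetical data (not the study's design), comparing two prior widths for one item's difficulty, is:

```python
# Minimal sketch (hypothetical data, not the study's design): grid-approximation
# posterior for one item's difficulty under a 2PL model with known abilities,
# contrasting an informative prior with a diffuse one.
import numpy as np

rng = np.random.default_rng(7)
theta = rng.normal(0.0, 1.0, 200)          # known abilities (simulation only)
a_true, b_true = 1.2, 0.4
x = (rng.uniform(size=theta.size) <
     1.0 / (1.0 + np.exp(-a_true * (theta - b_true)))).astype(int)

grid = np.linspace(-3, 3, 301)
p = 1.0 / (1.0 + np.exp(-a_true * (theta[None, :] - grid[:, None])))
loglik = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)

for label, sd in [("informative N(0, 1)", 1.0), ("diffuse N(0, 10)", 10.0)]:
    log_post = loglik - grid**2 / (2 * sd**2)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    print(label, "posterior mean b =", round(np.sum(grid * post), 3))
```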
Peer reviewed
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
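The random block effect described in this abstract plays the same role as a testlet effect: items in the same block share a random term that induces local dependence. A sketch of one assumed functional form (not the authors' exact specification) is:

```python
# Minimal sketch (an assumed functional form, not the authors' exact specification):
# a testlet-style response function in which items in the same block share a
# random block effect u_b, inducing local dependence within the block.
import numpy as np

def block_response_prob(theta, a, b, u_block):
    """P(endorse item | theta) with a shared random effect for the item's block."""
    return 1.0 / (1.0 + np.exp(-(a * (theta - b) + u_block)))

rng = np.random.default_rng(3)
theta = 0.5                               # one respondent's trait level
u = rng.normal(0.0, 0.8)                  # random effect for one three-item block
a = np.array([1.1, 0.9, 1.3])             # hypothetical item discriminations
b = np.array([-0.2, 0.3, 0.6])            # hypothetical item locations
print(block_response_prob(theta, a, b, u))
```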
Peer reviewed
Ames, Allison J. – Measurement: Interdisciplinary Research and Perspectives, 2018
Bayesian item response theory (IRT) modeling stages include (a) specifying the IRT likelihood model, (b) specifying the parameter prior distributions, (c) obtaining the posterior distribution, and (d) making appropriate inferences. The latter stage, and the focus of this research, includes model criticism. Choice of priors with the posterior…
Descriptors: Bayesian Statistics, Item Response Theory, Statistical Inference, Prediction
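Model criticism in the final stage described here often takes the form of a posterior predictive check. A generic sketch (not the article's specific discrepancy measures), using hypothetical posterior draws from a 2PL model, is:

```python
# Minimal sketch (generic posterior predictive check, not the article's specific
# discrepancy measures): simulate replicated data sets from posterior draws of a
# 2PL model and compare an observed statistic with its replicated distribution.
import numpy as np

def ppp_value(x_obs, theta_draws, a_draws, b_draws, stat, rng):
    """Posterior predictive p-value for a test statistic `stat(data)`."""
    exceed = 0
    for theta, a, b in zip(theta_draws, a_draws, b_draws):
        p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
        x_rep = (rng.uniform(size=p.shape) < p).astype(int)
        exceed += stat(x_rep) >= stat(x_obs)
    return exceed / len(theta_draws)

# Hypothetical posterior draws (2 draws, 5 persons, 3 items) for illustration only.
rng = np.random.default_rng(11)
theta_draws = rng.normal(size=(2, 5))
a_draws = np.abs(rng.normal(1.0, 0.2, size=(2, 3)))
b_draws = rng.normal(size=(2, 3))
x_obs = rng.integers(0, 2, size=(5, 3))
print(ppp_value(x_obs, theta_draws, a_draws, b_draws, stat=lambda d: d.sum(), rng=rng))
```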