DeCarlo, Lawrence T. – Applied Psychological Measurement, 2012
In the typical application of a cognitive diagnosis model, the Q-matrix, which reflects the theory with respect to the skills indicated by the items, is assumed to be known. However, the Q-matrix is usually determined by expert judgment, and so there can be uncertainty about some of its elements. Here it is shown that this uncertainty can be…
Descriptors: Bayesian Statistics, Item Response Theory, Simulation, Models
Dai, Yunyun – Applied Psychological Measurement, 2013
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Descriptors: Item Response Theory, Test Bias, Computation, Bayesian Statistics
DeMars, Christine E. – Applied Psychological Measurement, 2012
A testlet is a cluster of items that share a common passage, scenario, or other context. These items might measure something in common beyond the trait measured by the test as a whole; if so, the model for the item responses should allow for this testlet trait. But modeling testlet effects that are negligible makes the model unnecessarily…
Descriptors: Test Items, Item Response Theory, Comparative Analysis, Models
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien – Applied Psychological Measurement, 2013
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Bayesian Statistics
Magis, David; Raiche, Gilles – Applied Psychological Measurement, 2010
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
Descriptors: Maximum Likelihood Statistics, Computation, Bayesian Statistics, Item Response Theory
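The contrast between ML and MAP proficiency estimation that Magis and Raiché discuss can be sketched in a few lines. This is a minimal illustration under a Rasch model (not the authors' method, and the item difficulties and grid bounds are invented for the example); it shows the nonuniqueness/divergence problem for ML with an all-correct response pattern, which a prior resolves.

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_likelihood(theta, responses, bs):
    return sum(
        math.log(rasch_p(theta, b)) if x == 1 else math.log(1.0 - rasch_p(theta, b))
        for x, b in zip(responses, bs)
    )

def estimate_theta(responses, bs, prior_sd=None):
    """Grid-search ML estimate of proficiency; if prior_sd is set,
    a MAP estimate under a N(0, prior_sd^2) prior on theta."""
    grid = [i / 100.0 for i in range(-400, 401)]  # theta in [-4, 4]
    def objective(theta):
        ll = log_likelihood(theta, responses, bs)
        if prior_sd is not None:
            ll += -0.5 * (theta / prior_sd) ** 2  # log-prior up to a constant
        return ll
    return max(grid, key=objective)

bs = [-1.0, 0.0, 1.0]  # hypothetical item difficulties
# All-correct pattern: the ML estimate runs off to the grid edge
# (the likelihood is monotone in theta), while MAP stays finite.
ml = estimate_theta([1, 1, 1], bs)
map_ = estimate_theta([1, 1, 1], bs, prior_sd=1.0)
print(ml, map_)
```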
Kieftenbeld, Vincent; Natesan, Prathiba – Applied Psychological Measurement, 2012
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Descriptors: Test Length, Markov Processes, Item Response Theory, Monte Carlo Methods
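The MCMC side of the comparison above can be illustrated with a toy random-walk Metropolis sampler for a single ability parameter under a Rasch model with a N(0, 1) prior. This is a sketch of the general Gibbs/MCMC idea only, not the graded response model or the study's estimation setup; the item difficulties, step size, and burn-in are assumptions chosen for the example.

```python
import math
import random

def rasch_loglik(theta, responses, bs):
    """Rasch log-likelihood of a response pattern at ability theta."""
    ll = 0.0
    for x, b in zip(responses, bs):
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        ll += math.log(p) if x == 1 else math.log(1.0 - p)
    return ll

def sample_theta(responses, bs, n_iter=5000, burn=1000, step=0.5, seed=42):
    """Random-walk Metropolis draws from the posterior of theta
    under a N(0, 1) prior."""
    rng = random.Random(seed)
    theta = 0.0
    def log_post(t):
        return rasch_loglik(t, responses, bs) - 0.5 * t * t
    draws = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop  # accept the proposal
        draws.append(theta)
    return draws[burn:]  # discard burn-in

draws = sample_theta([1, 1, 0], [-1.0, 0.0, 1.0])
posterior_mean = sum(draws) / len(draws)
```

The posterior mean of the retained draws plays the role of the fully Bayesian point estimate that the study compares against MML recovery.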
Kang, Taehoon; Cohen, Allan S.; Sung, Hyun-Jung – Applied Psychological Measurement, 2009
This study examines the utility of four indices for use in model selection with nested and nonnested polytomous item response theory (IRT) models: a cross-validation index and three information-based indices. Four commonly used polytomous IRT models are considered: the graded response model, the generalized partial credit model, the partial credit…
Descriptors: Item Response Theory, Models, Selection, Simulation
Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo – Applied Psychological Measurement, 2009
This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information coefficient (AIC), Bayesian information coefficient (BIC), deviance information coefficient (DIC), pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
Descriptors: Item Response Theory, Models, Selection, Methods
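Two of the indices compared in these model-selection studies, AIC and BIC, are simple functions of a fitted model's log-likelihood. The sketch below uses invented log-likelihoods and parameter counts (the "1PL" and "2PL" fits are hypothetical) to show how the two criteria can disagree: BIC's sample-size-dependent penalty favors the smaller model where AIC does not.

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: smaller is better."""
    return -2.0 * log_lik + 2.0 * k

def bic(log_lik, k, n):
    """Bayesian information criterion: penalizes extra parameters
    more heavily as sample size n grows."""
    return -2.0 * log_lik + k * math.log(n)

# Hypothetical fits: a 1PL with 20 item parameters, a 2PL with 40.
ll_1pl, k_1pl = -5234.7, 20
ll_2pl, k_2pl = -5210.3, 40
n = 1000  # examinees

print(aic(ll_1pl, k_1pl), aic(ll_2pl, k_2pl))      # AIC prefers the 2PL here
print(bic(ll_1pl, k_1pl, n), bic(ll_2pl, k_2pl, n))  # BIC prefers the 1PL
```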
Finkelman, Matthew David – Applied Psychological Measurement, 2010
In sequential mastery testing (SMT), assessment via computer is used to classify examinees into one of two mutually exclusive categories. Unlike paper-and-pencil tests, SMT has the capability to use variable-length stopping rules. One approach to shortening variable-length tests is stochastic curtailment, which halts examination if the probability…
Descriptors: Mastery Tests, Computer Assisted Testing, Adaptive Testing, Test Length
van der Linden, Wim J. – Applied Psychological Measurement, 2009
An adaptive testing method is presented that controls the speededness of a test using predictions of the test takers' response times on the candidate items in the pool. Two different types of predictions are investigated: posterior predictions given the actual response times on the items already administered and posterior predictions that use the…
Descriptors: Simulation, Adaptive Testing, Vocational Aptitude, Bayesian Statistics
de la Torre, Jimmy; Song, Hao – Applied Psychological Measurement, 2009
Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…
Descriptors: Ability, Tests, Item Response Theory, Data Analysis
Kang, Taehoon; Cohen, Allan S. – Applied Psychological Measurement, 2007
Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…
Descriptors: Simulation, Item Response Theory, Comparative Analysis, Bayesian Statistics
Hoshino, Takahiro; Shigemasu, Kazuo – Applied Psychological Measurement, 2008
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Descriptors: Monte Carlo Methods, Markov Processes, Factor Analysis, Computation

Swaminathan, Hariharan; Hambleton, Ronald K.; Sireci, Stephen G.; Xing, Dehui; Rizavi, Saba M. – Applied Psychological Measurement, 2003
Descriptors: Bayesian Statistics, Estimation (Mathematics), Item Response Theory, Sample Size

McLeod, Lori; Lewis, Charles; Thissen, David – Applied Psychological Measurement, 2003
Explored procedures to detect test takers using item preknowledge in computerized adaptive testing and suggested a Bayesian posterior log odds ratio index for this purpose. Simulation results support the use of the odds ratio index. (SLD)
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Knowledge Level
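The general shape of a posterior log odds ratio index for preknowledge can be sketched as follows. This is a simplified illustration, not McLeod, Lewis, and Thissen's procedure: the success probabilities, the flat "preknowledge" probability, and the prior are all assumptions for the example. The index adds the log prior odds to the log-likelihood ratio of the response pattern under the two hypotheses.

```python
import math

def posterior_log_odds(responses, p_normal, p_preknow, prior_prob=0.01):
    """Log posterior odds that an examinee had item preknowledge.
    p_normal: model-based success probabilities for this examinee;
    p_preknow: (higher) success probabilities if items were exposed."""
    def log_lik(ps):
        return sum(math.log(p if x == 1 else 1.0 - p)
                   for x, p in zip(responses, ps))
    log_prior_odds = math.log(prior_prob / (1.0 - prior_prob))
    return log_prior_odds + log_lik(p_preknow) - log_lik(p_normal)

# Hypothetical low-ability examinee: success probabilities are low
# under the IRT model but high if the items were compromised.
p_normal = [0.2, 0.3, 0.25, 0.2]
p_preknow = [0.9, 0.9, 0.9, 0.9]

suspicious = posterior_log_odds([1, 1, 1, 1], p_normal, p_preknow)
typical = posterior_log_odds([0, 1, 0, 0], p_normal, p_preknow)
```

A positive index flags the all-correct pattern as more consistent with preknowledge than with the examinee's estimated ability; the typical pattern stays well below zero.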