Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 10 |
Descriptor
Bayesian Statistics | 12 |
Markov Processes | 12 |
Monte Carlo Methods | 11 |
Computation | 9 |
Item Response Theory | 9 |
Computer Software | 5 |
Models | 5 |
Simulation | 5 |
Correlation | 2 |
Data Analysis | 2 |
Error of Measurement | 2 |
Source
Applied Psychological… | 12 |
Author
Shigemasu, Kazuo | 2 |
de la Torre, Jimmy | 2 |
Babcock, Ben | 1 |
Bradlow, Eric T. | 1 |
Chen, Po-Hsi | 1 |
Dai, Yunyun | 1 |
DeCarlo, Lawrence T. | 1 |
Hong, Yuan | 1 |
Hoshino, Takahiro | 1 |
Huang, Hung-Yu | 1 |
Johnson, Matthew S. | 1 |
Publication Type
Journal Articles | 12 |
Reports - Evaluative | 5 |
Reports - Research | 5 |
Reports - Descriptive | 2 |
Education Level
Higher Education | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Postsecondary Education | 1 |
Secondary Education | 1 |
Location
Taiwan | 1 |
Dai, Yunyun – Applied Psychological Measurement, 2013
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Descriptors: Item Response Theory, Test Bias, Computation, Bayesian Statistics
Johnson, Timothy R. – Applied Psychological Measurement, 2013
One of the distinctions between classical test theory and item response theory is that the former focuses on sum scores and their relationship to true scores, whereas the latter concerns item responses and their relationship to latent scores. Although item response theory is often viewed as the richer of the two theories, sum scores are still…
Descriptors: Item Response Theory, Scores, Computation, Bayesian Statistics
Babcock, Ben – Applied Psychological Measurement, 2011
Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
Descriptors: Item Response Theory, Sampling, Computation, Statistical Analysis
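The Metropolis-Hastings within Gibbs scheme this study uses alternates random-walk MH updates over person and item parameter blocks, each conditioned on the current values of the other block. A minimal sketch of that sampler structure for a unidimensional 2PL model (the paper's actual model is a two-parameter noncompensatory MIRT; the simpler 2PL, the priors, and all tuning constants here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b):
    """2PL response probability P(Y = 1 | theta, a, b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def bernoulli_ll(y, p, axis):
    """Log-likelihood of a 0/1 response matrix, summed over one axis."""
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p), axis=axis)

# --- simulate a small data set ---
n_persons, n_items = 300, 12
theta_true = rng.normal(size=n_persons)
a_true = rng.uniform(0.8, 1.6, n_items)
b_true = rng.normal(size=n_items)
y = (rng.random((n_persons, n_items))
     < p_correct(theta_true[:, None], a_true, b_true)).astype(int)

# --- Metropolis-Hastings within Gibbs ---
theta = np.zeros(n_persons)
a = np.ones(n_items)
b = np.zeros(n_items)

for sweep in range(300):
    # person block: one random-walk MH step per theta_i, N(0, 1) prior
    prop = theta + rng.normal(scale=0.5, size=n_persons)
    ll_cur = bernoulli_ll(y, p_correct(theta[:, None], a, b), axis=1)
    ll_new = bernoulli_ll(y, p_correct(prop[:, None], a, b), axis=1)
    log_r = (ll_new - 0.5 * prop**2) - (ll_cur - 0.5 * theta**2)
    acc = np.log(rng.random(n_persons)) < log_r
    theta[acc] = prop[acc]

    # item block: joint step for (a_j, b_j); flat prior on a > 0, N(0, 1) on b
    pa = a + rng.normal(scale=0.1, size=n_items)
    pb = b + rng.normal(scale=0.1, size=n_items)
    ll_cur = bernoulli_ll(y, p_correct(theta[:, None], a, b), axis=0)
    ll_new = bernoulli_ll(y, p_correct(theta[:, None], pa, pb), axis=0)
    log_r = (ll_new - 0.5 * pb**2) - (ll_cur - 0.5 * b**2)
    log_r = np.where(pa > 0, log_r, -np.inf)  # reject negative discriminations
    acc = np.log(rng.random(n_items)) < log_r
    a[acc], b[acc] = pa[acc], pb[acc]
```

After a few hundred sweeps the sampled abilities track the generating values; in practice one would run many more iterations, discard burn-in, and monitor convergence.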
Huang, Hung-Yu; Wang, Wen-Chung; Chen, Po-Hsi; Su, Chi-Ming – Applied Psychological Measurement, 2013
Many latent traits in the human sciences have a hierarchical structure. This study aimed to develop a new class of higher order item response theory models for hierarchical latent traits that are flexible in accommodating both dichotomous and polytomous items, to estimate both item and person parameters jointly, to allow users to specify…
Descriptors: Item Response Theory, Models, Vertical Organization, Bayesian Statistics
Kieftenbeld, Vincent; Natesan, Prathiba – Applied Psychological Measurement, 2012
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Descriptors: Test Length, Markov Processes, Item Response Theory, Monte Carlo Methods
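The graded response model whose parameter recovery is compared here assigns category probabilities through cumulative logits: P(Y ≥ k) is a 2PL curve at threshold b_k, and adjacent differences give the category probabilities. A minimal sketch (Samejima's model; parameter values below are illustrative):

```python
import numpy as np

def grm_probs(theta, a, b):
    """Samejima graded response model category probabilities.

    theta : latent trait (scalar)
    a     : item discrimination
    b     : increasing category thresholds; K = len(b) + 1 categories
    """
    b = np.asarray(b, dtype=float)
    # cumulative probabilities P(Y >= k) for k = 1..K-1
    star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # pad with P(Y >= 0) = 1 and P(Y >= K) = 0, then difference
    cum = np.concatenate(([1.0], star, [0.0]))
    return cum[:-1] - cum[1:]

# e.g., a four-category item at theta = 0.5
probs = grm_probs(0.5, 1.2, [-1.0, 0.0, 1.0])
```

As long as the thresholds are increasing, the probabilities are positive and sum to one, which is what makes Gibbs sampling over augmented cumulative responses tractable.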
DeCarlo, Lawrence T. – Applied Psychological Measurement, 2011
Cognitive diagnostic models (CDMs) attempt to uncover latent skills or attributes that examinees must possess in order to answer test items correctly. The DINA (deterministic input, noisy "and") model is a popular CDM that has been widely used. It is shown here that a logistic version of the model can easily be fit with standard software for…
Descriptors: Bayesian Statistics, Computation, Cognitive Tests, Diagnostic Tests
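The logistic version referred to here rewrites the DINA guessing and slip parameters as an intercept and slope on the logit scale, so the conditional model becomes an ordinary logistic regression on the latent indicator eta. A sketch of that equivalence (variable names are mine, not the paper's notation):

```python
import numpy as np

def dina_eta(alpha, q):
    """eta_ij = 1 iff examinee i has every skill that item j requires."""
    return np.all(alpha[:, None, :] >= q[None, :, :], axis=2).astype(int)

def dina_prob(alpha, q, guess, slip):
    """Classic DINA: P(correct) = guess if eta = 0, 1 - slip if eta = 1."""
    return np.where(dina_eta(alpha, q) == 1, 1.0 - slip, guess)

def dina_logistic_prob(alpha, q, f, d):
    """Logistic form: logit P = f_j + d_j * eta_ij,
    with f_j = logit(guess_j) and f_j + d_j = logit(1 - slip_j)."""
    eta = dina_eta(alpha, q)
    return 1.0 / (1.0 + np.exp(-(f + d * eta)))

# the two parameterizations give identical probabilities
alpha = np.array([[1, 1], [1, 0], [0, 0]])  # examinee skill profiles
q = np.array([[1, 0], [1, 1]])              # Q-matrix (items x skills)
g, s = 0.2, 0.1
f = np.log(g / (1 - g)) * np.ones(2)        # intercept = logit(guess)
d = np.log((1 - s) / s) - f                 # slope lifts logit to logit(1 - slip)
```

Because the logit is linear in eta, any software that fits logistic models with latent class predictors can estimate the model, which is the practical point of the paper.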
de la Torre, Jimmy; Hong, Yuan – Applied Psychological Measurement, 2010
Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure), items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…
Descriptors: Sample Size, Markov Processes, Tests, Data Analysis
Okada, Kensuke; Shigemasu, Kazuo – Applied Psychological Measurement, 2009
Bayesian multidimensional scaling (MDS) has attracted a great deal of attention because: (1) it provides a better fit than do classical MDS and ALSCAL; (2) it provides estimation errors of the distances; and (3) the Bayesian dimension selection criterion, MDSIC, provides a direct indication of optimal dimensionality. However, Bayesian MDS is not…
Descriptors: Bayesian Statistics, Multidimensional Scaling, Computation, Computer Software
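Bayesian MDS of the kind described here treats each observed dissimilarity as a noisy measurement of the Euclidean distance between latent object coordinates. A sketch of the Gaussian-error log-likelihood such a sampler targets (a simplification: the usual formulation truncates the normal at zero since dissimilarities are nonnegative, and that term is omitted here):

```python
import numpy as np

def pairwise_distances(x):
    """Euclidean distances among the rows of x (n objects x p dimensions)."""
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def bmds_log_lik(delta, x, sigma2):
    """delta_ij ~ N(d_ij(x), sigma2) for i < j, ignoring the truncation at zero."""
    d = pairwise_distances(x)
    i, j = np.triu_indices(len(x), k=1)
    resid = delta[i, j] - d[i, j]
    m = len(resid)
    return (-0.5 * np.sum(resid ** 2) / sigma2
            - 0.5 * m * np.log(2 * np.pi * sigma2))

# noiseless check: the generating configuration maximizes the likelihood
rng = np.random.default_rng(1)
x_true = rng.normal(size=(6, 2))
delta = pairwise_distances(x_true)
ll_true = bmds_log_lik(delta, x_true, 0.1)
ll_pert = bmds_log_lik(delta, x_true + rng.normal(scale=0.5, size=(6, 2)), 0.1)
```

The estimation error on each distance mentioned in the abstract falls out of the posterior spread of d_ij(x) across MCMC draws of x.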
de la Torre, Jimmy; Song, Hao – Applied Psychological Measurement, 2009
Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…
Descriptors: Ability, Tests, Item Response Theory, Data Analysis
Hoshino, Takahiro; Shigemasu, Kazuo – Applied Psychological Measurement, 2008
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Descriptors: Monte Carlo Methods, Markov Processes, Factor Analysis, Computation
Johnson, Matthew S.; Sinharay, Sandip – Applied Psychological Measurement, 2005
For complex educational assessments, there is an increasing use of item families, which are groups of related items. Calibration or scoring in an assessment involving item families requires models that can take into account the dependence structure inherent among the items that belong to the same item family. This article extends earlier works in…
Descriptors: National Competency Tests, Markov Processes, Bayesian Statistics
Wang, Xiaohui; Bradlow, Eric T.; Wainer, Howard – Applied Psychological Measurement, 2002
Proposes a modified version of commonly employed item response models in a fully Bayesian framework and obtains inferences under the model using Markov chain Monte Carlo techniques. Demonstrates use of the model in a series of simulations and with operational data from the North Carolina Test of Computer Skills and the Test of Spoken English…
Descriptors: Bayesian Statistics, Item Response Theory, Markov Processes, Mathematical Models