Showing all 13 results
Peer reviewed
Direct link
Johnson, Timothy R. – Applied Psychological Measurement, 2013
One of the distinctions between classical test theory and item response theory is that the former focuses on sum scores and their relationship to true scores, whereas the latter concerns item responses and their relationship to latent scores. Although item response theory is often viewed as the richer of the two theories, sum scores are still…
Descriptors: Item Response Theory, Scores, Computation, Bayesian Statistics
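A minimal sketch of the distinction the abstract draws, using hypothetical 2PL item parameters: the classical sum score is just the count of correct responses, while the IRT latent score is obtained by maximizing the likelihood of the response pattern.

```python
# Illustrative contrast between a CTT sum score and an IRT latent-trait
# estimate for the same response pattern (hypothetical 2PL item parameters).
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations (assumed)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulties (assumed)
x = np.array([1, 1, 1, 0, 0])              # observed item responses

sum_score = x.sum()                        # classical test theory focuses on this

def neg_loglik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response probabilities
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

theta_ml = minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded").x
print(f"sum score = {sum_score}, ML theta estimate = {theta_ml:.2f}")
```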
Peer reviewed
Direct link
Magis, David; Béland, Sébastien; Raîche, Gilles – Applied Psychological Measurement, 2011
In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…
Descriptors: Test Length, Computation, Item Response Theory, Maximum Likelihood Statistics
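A small illustration of the problem described above, assuming Rasch items with made-up difficulties: for a perfect response pattern the likelihood increases without bound in theta, so the ML estimate runs off to the boundary, whereas adding a proper prior (a Bayes modal estimate) keeps it finite.

```python
# Sketch of the infinite-ML-estimate problem for a perfect response pattern
# under the Rasch model, and how a standard normal prior keeps the estimate
# finite (item difficulties are assumed for illustration).
import numpy as np
from scipy.optimize import minimize_scalar

b = np.array([-1.0, 0.0, 1.0, 2.0])   # assumed item difficulties
x = np.ones_like(b)                    # perfect score: every item correct

def loglik(theta):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# The likelihood is increasing in theta, so a bounded search simply runs to
# the upper boundary -- the ML estimate is effectively infinite.
ml = minimize_scalar(lambda t: -loglik(t), bounds=(-10, 10), method="bounded").x

# Adding a standard normal log-prior (a Bayesian modal estimate) gives a
# finite interior maximum.
map_ = minimize_scalar(lambda t: -(loglik(t) - 0.5 * t**2),
                       bounds=(-10, 10), method="bounded").x
print(f"'ML' estimate at boundary: {ml:.1f}, MAP estimate: {map_:.2f}")
```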
Peer reviewed
Direct link
Magis, David; Raîche, Gilles – Applied Psychological Measurement, 2010
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
Descriptors: Maximum Likelihood Statistics, Computation, Bayesian Statistics, Item Response Theory
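A hedged sketch of the issue, with invented 3PL parameters and response pattern: scanning the log-likelihood on a grid can reveal more than one local maximum, and adding a standard normal log-prior (the MAP objective) often restores a single mode. Whether multimodality actually appears depends on the data.

```python
# With the 3PL model the likelihood in theta need not be unimodal. A simple
# grid scan (illustrative parameters; whether multiple maxima appear depends
# on the response pattern) can reveal local maxima, and adding a normal
# log-prior (MAP) often restores a single well-defined mode.
import numpy as np

a = np.array([1.8, 1.2, 2.0, 1.5])
b = np.array([-0.5, 0.0, 1.0, 1.5])
c = np.array([0.2, 0.25, 0.2, 0.2])     # guessing parameters (assumed)
x = np.array([0, 1, 0, 1])              # an aberrant response pattern

grid = np.linspace(-4, 4, 801)

def loglik(theta):
    p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

ll = np.array([loglik(t) for t in grid])
map_obj = ll - 0.5 * grid**2            # log-likelihood + standard normal log-prior

def n_local_maxima(y):
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

print("local maxima, ML :", n_local_maxima(ll))
print("local maxima, MAP:", n_local_maxima(map_obj))
```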
Peer reviewed
Direct link
Kieftenbeld, Vincent; Natesan, Prathiba – Applied Psychological Measurement, 2012
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Descriptors: Test Length, Markov Processes, Item Response Theory, Monte Carlo Methods
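For reference, a short sketch of the graded response model whose parameters are being recovered (Samejima's model; the parameter values below are assumed, and this does not implement either estimation method).

```python
# Category probabilities for Samejima's graded response model.
import numpy as np

def grm_probs(theta, a, thresholds):
    """P(X = k | theta) for k = 0..m, given ordered thresholds b_1 < ... < b_m."""
    b = np.asarray(thresholds)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= k) for k = 1..m
    cum = np.concatenate(([1.0], p_star, [0.0]))
    return cum[:-1] - cum[1:]

print(grm_probs(theta=0.5, a=1.3, thresholds=[-1.0, 0.0, 1.2]))  # sums to 1
```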
Peer reviewed
Direct link
Meyer, J. Patrick – Applied Psychological Measurement, 2010
An examinee faced with a test item will engage in solution behavior or rapid-guessing behavior. These qualitatively different test-taking behaviors bias parameter estimates for item response models that do not control for such behavior. A mixture Rasch model with item response time components was proposed and evaluated through application to real…
Descriptors: Item Response Theory, Response Style (Tests), Reaction Time, Computation
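A deliberately simplified illustration of the response-time idea, not the article's mixture Rasch model: the posterior probability that a response came from a fast "guessing" component of a two-component lognormal response-time mixture, with all component parameters assumed.

```python
# Simplified separation of rapid guessing from solution behavior using
# response times: posterior probability that a response belongs to the fast
# "guessing" component of a two-component lognormal mixture.
# (Component parameters and the mixing weight are assumed.)
import numpy as np
from scipy.stats import lognorm

pi_guess = 0.15                      # assumed proportion of rapid guesses
guess = lognorm(s=0.4, scale=2.0)    # fast component, median ~2 s
solve = lognorm(s=0.6, scale=20.0)   # solution component, median ~20 s

def p_rapid_guess(rt_seconds):
    num = pi_guess * guess.pdf(rt_seconds)
    den = num + (1 - pi_guess) * solve.pdf(rt_seconds)
    return num / den

for rt in (1.5, 5.0, 30.0):
    print(f"RT = {rt:5.1f} s -> P(rapid guess) = {p_rapid_guess(rt):.2f}")
```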
Peer reviewed
Direct link
Finkelman, Matthew David – Applied Psychological Measurement, 2010
In sequential mastery testing (SMT), assessment via computer is used to classify examinees into one of two mutually exclusive categories. Unlike paper-and-pencil tests, SMT has the capability to use variable-length stopping rules. One approach to shortening variable-length tests is stochastic curtailment, which halts examination if the probability…
Descriptors: Mastery Tests, Computer Assisted Testing, Adaptive Testing, Test Length
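A hedged sketch of a number-correct stochastic curtailment rule of the kind described: stop early when the probability that the remaining items could change the provisional pass/fail decision falls below a small threshold. The cut score, the assumed success probability for remaining items, and the threshold are all illustrative.

```python
# Stochastic curtailment for a number-correct sequential mastery test:
# halt once the chance of crossing (or failing to cross) the cut score on the
# remaining items is negligibly small.
from scipy.stats import binom

def curtail(correct_so_far, items_remaining, cut_score, p_success=0.6, gamma=0.05):
    needed = cut_score - correct_so_far            # correct answers still needed to pass
    if needed <= 0:
        return "stop: pass already guaranteed"
    if needed > items_remaining:
        return "stop: pass no longer possible"
    p_pass = binom.sf(needed - 1, items_remaining, p_success)  # P(at least `needed` correct)
    if p_pass <= gamma:
        return f"stop: P(pass) = {p_pass:.3f} <= {gamma}"
    if 1 - p_pass <= gamma:
        return f"stop: P(fail) = {1 - p_pass:.3f} <= {gamma}"
    return f"continue testing (P(pass) = {p_pass:.3f})"

print(curtail(correct_so_far=12, items_remaining=10, cut_score=21))
```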
Peer reviewed
Skaggs, Gary; Stevenson, Jose – Applied Psychological Measurement, 1989
Pseudo-Bayesian and joint maximum likelihood procedures were compared for their ability to estimate item parameters for item response theory's (IRT's) three-parameter logistic model. Item responses were generated for sample sizes of 2,000 and 500; test lengths of 35 and 15; and examinees of high, medium, and low ability. (TJH)
Descriptors: Bayesian Statistics, Comparative Analysis, Computer Software, Estimation (Mathematics)
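For context, the three-parameter logistic item response function whose parameters the compared procedures estimate (illustrative values; some programs include a 1.7 scaling constant, omitted here).

```python
# The three-parameter logistic (3PL) item response function.
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """P(correct | theta) with discrimination a, difficulty b, guessing c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

print(p_correct_3pl(theta=0.0, a=1.2, b=0.5, c=0.2))
```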
Peer reviewed
Nicewander, W. Alan; Thomasson, Gary L. – Applied Psychological Measurement, 1999
Derives three reliability estimates for the Bayes modal estimate (BME) and the maximum-likelihood estimate (MLE) of theta in computerized adaptive tests (CATs). Computes the three reliability estimates and the true reliabilities of both BME and MLE for seven simulated CATs. Results show the true reliabilities for BME and MLE to be nearly identical…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
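One common form of a marginal (empirical) reliability estimate for CAT ability estimates, shown only to illustrate the kind of quantity involved; it is not necessarily one of the article's three estimates, and the inputs are simulated placeholders.

```python
# Marginal reliability as the ratio of estimated true-score variance to
# observed-score variance, using each examinee's squared standard error.
import numpy as np

rng = np.random.default_rng(0)
theta_hat = rng.normal(0.0, 1.05, size=1000)   # ability estimates (assumed)
se = rng.uniform(0.25, 0.35, size=1000)        # their standard errors (assumed)

obs_var = theta_hat.var(ddof=1)
error_var = np.mean(se**2)
reliability = (obs_var - error_var) / obs_var
print(f"estimated marginal reliability = {reliability:.3f}")
```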
Peer reviewed
Wang, Tianyou; Hanson, Bradley A.; Lau, Che-Ming A. – Applied Psychological Measurement, 1999
Extended the use of a beta prior in trait estimation to the maximum a posteriori (MAP) method of Bayesian estimation. This new method, essentially unbiased MAP, was compared with MAP, essentially unbiased expected a posteriori, weighted likelihood, and maximum-likelihood estimation methods. The new method significantly reduced bias in…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Estimation (Mathematics)
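A sketch of one of the comparison methods named above, weighted likelihood estimation: for the Rasch model, Warm's estimator maximizes the likelihood times the square root of the test information (item difficulties and responses assumed).

```python
# Weighted likelihood (Warm) estimation of theta under the Rasch model:
# maximize log-likelihood plus half the log of the test information.
import numpy as np
from scipy.optimize import minimize_scalar

b = np.array([-1.5, -0.5, 0.5, 1.5])   # assumed item difficulties
x = np.array([1, 1, 1, 0])             # observed responses

def objective(theta):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    loglik = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    info = np.sum(p * (1 - p))          # Rasch test information
    return -(loglik + 0.5 * np.log(info))

theta_wle = minimize_scalar(objective, bounds=(-4, 4), method="bounded").x
print(f"weighted likelihood estimate of theta = {theta_wle:.2f}")
```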
Peer reviewed
Bock, R. Darrell; And Others – Applied Psychological Measurement, 1988
A method of item factor analysis based on Thurstone's multiple-factor model, implemented by marginal maximum likelihood estimation and the EM algorithm, is described. Also assessed are the statistical significance of successive factors added to the model, provisions for guessing and omitted items, and Bayes constraints. (TJH)
Descriptors: Algorithms, Bayesian Statistics, Equations (Mathematics), Estimation (Mathematics)
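For reference, the multiple-factor (normal ogive) item response model that this kind of full-information item factor analysis fits: the probability of a correct response depends on a linear combination of factor scores. The loadings, intercepts, and factor scores below are assumed.

```python
# Multiple-factor normal-ogive item response model: P(correct) is the normal
# CDF of an item intercept plus a weighted sum of factor scores.
import numpy as np
from scipy.stats import norm

loadings = np.array([[0.8, 0.1],     # item-by-factor slope matrix (assumed)
                     [0.2, 0.9],
                     [0.5, 0.5]])
intercepts = np.array([0.0, -0.3, 0.4])
theta = np.array([0.6, -0.2])        # one person's factor scores

p_correct = norm.cdf(intercepts + loadings @ theta)
print(p_correct)
```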
Peer reviewed
Harwell, Michael R.; Baker, Frank B. – Applied Psychological Measurement, 1991
Previous work on the mathematical and implementation details of the marginalized maximum likelihood estimation procedure is extended to encompass R. J. Mislevy's (1986) marginalized Bayesian procedure for estimating item parameters and to communicate this procedure to users of the BILOG computer program. (SLD)
Descriptors: Bayesian Statistics, Equations (Mathematics), Estimation (Mathematics), Item Response Theory
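A compact sketch of the idea behind marginalized Bayesian item-parameter estimation: the marginal log-likelihood, with theta integrated out by quadrature, is augmented with log-priors on the item parameters before maximization. The data, priors, quadrature grid, and the device of estimating one item while holding the others fixed are assumptions made for illustration, not BILOG's actual algorithm.

```python
# Marginal log-likelihood (theta integrated out on a quadrature grid) plus
# log-priors on one item's 2PL parameters, maximized to get posterior modes.
import numpy as np
from scipy.stats import norm, lognorm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
nodes = np.linspace(-4, 4, 21)
wts = norm.pdf(nodes); wts /= wts.sum()                 # crude normal quadrature

a_true = np.array([1.2, 1.0, 1.3, 0.8])                 # generating 2PL parameters
b_true = np.array([0.0, -0.5, 0.3, 1.0])
theta = rng.normal(size=300)
P_true = 1 / (1 + np.exp(-a_true * (theta[:, None] - b_true)))
X = (rng.random(P_true.shape) < P_true).astype(int)     # persons x items

def neg_log_posterior(params, item=0):
    a, b = a_true.copy(), b_true.copy()                 # hold other items fixed
    a[item], b[item] = params
    P = 1 / (1 + np.exp(-a * (nodes[:, None] - b)))     # nodes x items
    like = np.prod(np.where(X[:, None, :] == 1, P, 1 - P), axis=2) @ wts
    log_prior = lognorm(s=0.5, scale=1.0).logpdf(params[0]) + norm(0, 2).logpdf(params[1])
    return -(np.sum(np.log(like)) + log_prior)

a_hat, b_hat = minimize(neg_log_posterior, x0=[1.0, 0.0], method="Nelder-Mead").x
print(f"posterior modes for item 1: a = {a_hat:.2f}, b = {b_hat:.2f}")
```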
Peer reviewed
Kim, Seock-Ho; And Others – Applied Psychological Measurement, 1994
Type I error rates of F. M. Lord's chi square test for differential item functioning were investigated using Monte Carlo simulations with marginal maximum likelihood estimation and marginal Bayesian estimation algorithms. Lord's chi square did not provide useful Type I error control for the three-parameter logistic model at these sample sizes.…
Descriptors: Algorithms, Bayesian Statistics, Chi Square, Error of Measurement
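Lord's chi-square compares an item's parameter estimates between two groups, weighting the difference by the inverse of the summed sampling covariance matrices; a sketch with illustrative numbers follows.

```python
# Lord's chi-square for differential item functioning on one 3PL item
# (all estimates and covariance matrices below are illustrative).
import numpy as np
from scipy.stats import chi2

v_ref = np.array([1.10, 0.20, 0.18])     # (a, b, c) estimates, reference group
v_foc = np.array([0.95, 0.55, 0.22])     # (a, b, c) estimates, focal group
cov_ref = np.diag([0.02, 0.03, 0.001])   # estimated covariance matrices (assumed)
cov_foc = np.diag([0.03, 0.04, 0.001])

d = v_ref - v_foc
stat = d @ np.linalg.inv(cov_ref + cov_foc) @ d
p_value = chi2.sf(stat, df=len(d))
print(f"Lord's chi-square = {stat:.2f}, p = {p_value:.3f}")
```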
Peer reviewed
Gifford, Janice A.; Swaminathan, Hariharan – Applied Psychological Measurement, 1990
The effects of priors and amount of bias in the Bayesian approach to the estimation problem in item response models are examined using simulation studies. Different specifications of prior information have only modest effects on Bayesian estimates, which are less biased than joint maximum likelihood estimates for small samples. (TJH)
Descriptors: Bayesian Statistics, Comparative Analysis, Computer Simulation, Estimation (Mathematics)
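For readers unfamiliar with how "amount of bias" is summarized in simulation comparisons like this one: bias is the mean difference between recovered and generating parameter values over replications. The replication values below are placeholders to show the computation, not findings from the article.

```python
# Bias and RMSE of recovered item-difficulty estimates across replications
# (all numbers are illustrative placeholders).
import numpy as np

true_b = 0.50
bayes_estimates = np.array([0.47, 0.55, 0.52, 0.44, 0.58])   # assumed replications
jml_estimates   = np.array([0.36, 0.68, 0.29, 0.75, 0.41])

for label, est in (("Bayes", bayes_estimates), ("JML", jml_estimates)):
    bias = est.mean() - true_b
    rmse = np.sqrt(np.mean((est - true_b) ** 2))
    print(f"{label:>5}: bias = {bias:+.3f}, RMSE = {rmse:.3f}")
```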