Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 6
Since 2006 (last 20 years): 14

Descriptor
Data Analysis: 17
Item Response Theory: 17
Monte Carlo Methods: 17
Bayesian Statistics: 8
Models: 8
Simulation: 8
Markov Processes: 7
Computation: 6
Correlation: 6
Goodness of Fit: 5
Evaluation Methods: 4

Author
de la Torre, Jimmy: 3
Armstrong, Ronald D.: 1
Belov, Dmitry I.: 1
Cai, Li: 1
Conijn, Judith M.: 1
Dardick, William R.: 1
Davis, Richard L.: 1
Domingue, Benjamin W.: 1
Emons, Wilco H. M.: 1
Finch, Holmes: 1
Goodman, Noah: 1

Publication Type
Reports - Research: 15
Journal Articles: 14
Speeches/Meeting Papers: 2
Reports - Descriptive: 1
Reports - Evaluative: 1

Education Level
Elementary Secondary Education: 1
Higher Education: 1
Postsecondary Education: 1
Secondary Education: 1

Audience
Researchers: 2

Assessments and Surveys
MacArthur Communicative…: 1
Program for International…: 1
Xiaying Zheng; Ji Seung Yang; Jeffrey R. Harring – Structural Equation Modeling: A Multidisciplinary Journal, 2022
Measuring change in an educational or psychological construct over time is often achieved by repeatedly administering the same items to the same examinees and fitting a second-order latent growth curve model. However, latent growth modeling with full information maximum likelihood (FIML) estimation becomes computationally challenging…
Descriptors: Longitudinal Studies, Data Analysis, Item Response Theory, Structural Equation Models
Zopluoglu, Cengiz – Educational and Psychological Measurement, 2020
A mixture extension of Samejima's continuous response model for continuous measurement outcomes, and its estimation through a heuristic approach based on limited-information factor analysis, are introduced. Using an empirical data set, it is shown that two groups of respondents that differ both qualitatively and quantitatively in their response…
Descriptors: Item Response Theory, Measurement, Models, Heuristics
Pan, Tianshu; Yin, Yue – Applied Measurement in Education, 2017
In this article, we propose using the Bayes factors (BF) to evaluate person fit in item response theory models under the framework of Bayesian evaluation of an informative diagnostic hypothesis. We first discuss the theoretical foundation for this application and how to analyze person fit using BF. To demonstrate the feasibility of this approach,…
Descriptors: Bayesian Statistics, Goodness of Fit, Item Response Theory, Monte Carlo Methods
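The Pan and Yin entry pairs Bayes factors with Monte Carlo methods for person fit. As a simplified illustration (not the authors' actual procedure), a Bayes factor comparing a Rasch account of one examinee's response pattern against random guessing can be estimated by Monte Carlo integration over the ability prior; the item difficulties and the guessing baseline below are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def marginal_likelihood(responses, b, theta_draws):
    """Monte Carlo estimate of P(responses | model): average the response
    pattern's likelihood over draws from the N(0, 1) ability prior."""
    p = rasch_p(theta_draws[:, None], b[None, :])       # (n_draws, n_items)
    lik = np.prod(np.where(responses, p, 1.0 - p), axis=1)
    return lik.mean()

b = np.linspace(-2, 2, 20)         # hypothetical item difficulties
responses = b < 0.5                # a pattern consistent with ability ~ 0.5

m_irt = marginal_likelihood(responses, b, rng.standard_normal(50_000))
m_guess = 0.5 ** b.size            # competing hypothesis: random guessing
bf = m_irt / m_guess               # BF > 1 favors the IRT account
```

Here the pattern fits the Rasch model well, so the Bayes factor comes out far above 1; an erratic pattern would pull it toward the guessing hypothesis.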
Lee, Soo; Suh, Youngsuk – Journal of Educational Measurement, 2018
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Descriptors: Item Response Theory, Sample Size, Models, Error of Measurement
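Lord's Wald test in the entry above compares an item's parameter estimates across groups. A minimal sketch, assuming independent 2PL estimates (discrimination a, difficulty b) with known sampling covariances for a reference and a focal group; all numbers below are hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def lords_wald(est_ref, cov_ref, est_foc, cov_foc):
    """Wald statistic for DIF on one item: quadratic form in the
    between-group parameter difference, referred to a chi-square
    distribution with df = number of item parameters."""
    d = np.asarray(est_ref, float) - np.asarray(est_foc, float)
    v = np.asarray(cov_ref, float) + np.asarray(cov_foc, float)  # independent groups
    stat = float(d @ np.linalg.solve(v, d))
    return stat, chi2.sf(stat, df=d.size)

# Hypothetical (a, b) estimates and diagonal covariance matrices.
stat, p = lords_wald([1.2, 0.3], np.diag([0.04, 0.02]),
                     [1.1, 0.8], np.diag([0.05, 0.03]))
```

A small p-value flags the item for DIF; in practice the covariances come from the information matrix of whichever estimator (MML or MCMC) produced the item parameters.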
Dardick, William R.; Mislevy, Robert J. – Educational and Psychological Measurement, 2016
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Descriptors: Bayesian Statistics, Probability, Data Analysis, Item Response Theory
Wu, Mike; Davis, Richard L.; Domingue, Benjamin W.; Piech, Chris; Goodman, Noah – International Educational Data Mining Society, 2020
Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions, used in fields as diverse as education, medicine and psychology. Large modern datasets offer opportunities to capture more nuances in human behavior, potentially improving test scoring and better informing public policy. Yet larger…
Descriptors: Item Response Theory, Accuracy, Data Analysis, Public Policy
Monroe, Scott; Cai, Li – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2013
In Ramsay curve item response theory (RC-IRT, Woods & Thissen, 2006) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's (1981) EM algorithm, which yields maximum marginal likelihood estimates. This method, however,…
Descriptors: Item Response Theory, Maximum Likelihood Statistics, Statistical Inference, Models
de la Torre, Jimmy; Hong, Yuan – Applied Psychological Measurement, 2010
Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure), items are typically calibrated with much smaller samples than is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…
Descriptors: Sample Size, Markov Processes, Tests, Data Analysis
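The de la Torre and Hong entry concerns MCMC-based calibration with small samples. As a rough sketch of the idea (not their model), a random-walk Metropolis sampler can calibrate a single Rasch item difficulty under a N(0, 1) prior, here with abilities assumed known for simplicity:

```python
import numpy as np

rng = np.random.default_rng(2)

def mh_difficulty(responses, thetas, n_iter=2000, step=0.5):
    """Random-walk Metropolis sampler for one Rasch item difficulty b.
    The prior stabilizes the estimate when few examinees are available."""
    def log_post(b):
        p = 1.0 / (1.0 + np.exp(-(thetas - b)))
        p = np.clip(p, 1e-12, 1.0 - 1e-12)
        loglik = np.sum(np.where(responses, np.log(p), np.log(1.0 - p)))
        return loglik - 0.5 * b * b          # N(0, 1) log-prior (up to a constant)
    b, lp = 0.0, log_post(0.0)
    draws = np.empty(n_iter)
    for t in range(n_iter):
        prop = b + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            b, lp = prop, lp_prop
        draws[t] = b
    return draws

thetas = rng.standard_normal(30)                  # only 30 examinees
responses = rng.random(30) < 1.0 / (1.0 + np.exp(-(thetas - 0.4)))
draws = mh_difficulty(responses, thetas)
b_hat = draws[500:].mean()                        # posterior mean after burn-in
```

With only 30 responses the likelihood is flat, and the posterior mean is shrunk toward the prior mean of zero, which is exactly why Bayesian calibration is attractive in small samples.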
Conijn, Judith M.; Emons, Wilco H. M.; van Assen, Marcel A. L. M.; Sijtsma, Klaas – Multivariate Behavioral Research, 2011
The logistic person response function (PRF) models the probability of a correct response as a function of the item locations. Reise (2000) proposed to use the slope parameter of the logistic PRF as a person-fit measure. He reformulated the logistic PRF model as a multilevel logistic regression model and estimated the PRF parameters from this…
Descriptors: Monte Carlo Methods, Patients, Probability, Item Response Theory
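Reise's person-fit measure described above is the slope of a per-person logistic regression of correctness on item location. A minimal sketch (the ridge penalty is an assumption added here to keep the estimate finite when a response pattern is perfectly separable by difficulty):

```python
import numpy as np

def prf_slope(responses, b, lam=1.0, iters=50):
    """Slope of the logistic person response function
    P(correct) = sigmoid(w0 + w1 * b), fit by penalized Newton updates.
    Under normal responding the slope is negative (harder items are
    answered correctly less often); a flat or positive slope flags misfit."""
    X = np.column_stack([np.ones_like(b), b])
    y = responses.astype(float)
    w = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (y - p) - lam * w
        hess = X.T @ (X * (p * (1.0 - p))[:, None]) + lam * np.eye(2)
        w += np.linalg.solve(hess, grad)
    return w[1]

b = np.linspace(-2, 2, 40)    # hypothetical item locations
fitting = b < 0               # correct on easy items: the expected pattern
aberrant = b > 0              # correct on hard items only: person misfit
s_fit, s_aberrant = prf_slope(fitting, b), prf_slope(aberrant, b)
```

Reise's multilevel formulation estimates all persons' slopes jointly; fitting each person separately, as here, is the simplest possible version.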
Wang, Shudong; Jiao, Hong; Jin, Ying; Thum, Yeow Meng – Online Submission, 2010
The vertical scales of large-scale achievement tests created by using item response theory (IRT) models are mostly based on clustered (or correlated) educational data in which students usually are nested in certain groups or settings (classrooms or schools). While such applications directly violate the assumption of independent samples of persons in…
Descriptors: Scaling, Achievement Tests, Data Analysis, Item Response Theory
de la Torre, Jimmy; Song, Hao – Applied Psychological Measurement, 2009
Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…
Descriptors: Ability, Tests, Item Response Theory, Data Analysis
Finch, Holmes; Monahan, Patrick – Applied Measurement in Education, 2008
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…
Descriptors: Monte Carlo Methods, Factor Analysis, Generalization, Methods
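Finch and Monahan's method uses marginal maximum likelihood nonlinear factor analysis for binary items; the sketch below substitutes an ordinary linear one-factor model on continuous data as a stand-in, to illustrate the parametric-bootstrap logic of locating an observed second eigenvalue in its simulated null distribution:

```python
import numpy as np

rng = np.random.default_rng(3)

def second_eigenvalue(data):
    """Second-largest eigenvalue of the inter-item correlation matrix."""
    return np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[-2]

def bootstrap_pa(data, n_boot=200):
    """Parametric bootstrap in the spirit of Modified Parallel Analysis:
    fit a crude one-factor model, simulate data from it, and compare the
    observed second eigenvalue to the simulated distribution. A large
    observed value (small p) suggests more than one dimension."""
    n, k = data.shape
    vals, vecs = np.linalg.eigh(np.corrcoef(data, rowvar=False))
    load = vecs[:, -1] * np.sqrt(vals[-1])              # first-PC loadings
    uniq = np.sqrt(np.clip(1.0 - load**2, 1e-6, None))  # uniquenesses
    obs = second_eigenvalue(data)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        f = rng.standard_normal((n, 1))
        sim = f @ load[None, :] + rng.standard_normal((n, k)) * uniq
        boot[i] = second_eigenvalue(sim)
    return obs, float(np.mean(boot >= obs))             # bootstrap p-value

# Unidimensional toy data: the observed second eigenvalue should look
# typical under the fitted one-factor null.
f = rng.standard_normal((300, 1))
data = 0.6 * f + 0.8 * rng.standard_normal((300, 8))
obs, p_boot = bootstrap_pa(data)
```

The published MPA procedure replaces the linear factor model with a nonlinear (IRT-compatible) one so that binary item data can be simulated under the null; the comparison logic is the same.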
de la Torre, Jimmy – Applied Psychological Measurement, 2008
Recent work has shown that multidimensionally scoring responses from different tests can provide better ability estimates. For educational assessment data, applications of this approach have been limited to binary scores. Of the different variants, the de la Torre and Patz model is considered more general because implementing the scoring procedure…
Descriptors: Markov Processes, Scoring, Data Analysis, Item Response Theory
Segawa, Eisuke – Journal of Educational and Behavioral Statistics, 2005
Multi-indicator growth models were formulated as special three-level hierarchical generalized linear models to analyze growth of a trait latent variable measured by ordinal items. Items are nested within a time-point, and time-points are nested within subject. These models are special because they include factor analytic structure. This model can…
Descriptors: Bayesian Statistics, Mathematical Models, Factor Analysis, Computer Simulation
Belov, Dmitry I.; Armstrong, Ronald D. – Applied Psychological Measurement, 2005
A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…
Descriptors: Item Banks, Computer Assisted Testing, Monte Carlo Methods, Evaluation Methods
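The uniform-sampling property Belov and Armstrong describe follows from rejection sampling: draw fixed-length item combinations uniformly at random and keep only those that satisfy the constraints. A minimal sketch, with a hypothetical item pool and a single mean-difficulty constraint standing in for real content and statistical targets:

```python
import random

def mc_assemble(pool, test_len, feasible, n_tests, max_tries=100_000, seed=0):
    """Monte Carlo test assembly by rejection sampling: because candidates
    are drawn uniformly and only filtered by feasibility, every feasible
    test has an equal chance of being selected."""
    rng = random.Random(seed)
    tests, tries = [], 0
    while len(tests) < n_tests and tries < max_tries:
        tries += 1
        candidate = sorted(rng.sample(range(len(pool)), test_len))
        if feasible([pool[i] for i in candidate]):
            tests.append(candidate)
    return tests

# Hypothetical pool of 30 items with difficulties spread over [-2, 2].
pool = [{"b": -2 + 4 * i / 29} for i in range(30)]
balanced = lambda items: abs(sum(it["b"] for it in items) / len(items)) < 0.2
tests = mc_assemble(pool, test_len=10, feasible=balanced, n_tests=5)
```

Integer programming or enumerative heuristics tend to revisit the same corner of the feasible region; the uniform sampling here is what spreads item exposure across all feasible tests.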