Publication Date
In 2025: 0
Since 2024: 3
Since 2021 (last 5 years): 11
Since 2016 (last 10 years): 37
Since 2006 (last 20 years): 93
Descriptor
Item Response Theory: 118
Markov Processes: 118
Monte Carlo Methods: 100
Bayesian Statistics: 60
Models: 56
Computation: 45
Test Items: 30
Simulation: 25
Comparative Analysis: 22
Foreign Countries: 17
Correlation: 16
Author
de la Torre, Jimmy: 10
Jiao, Hong: 5
Fox, Jean-Paul: 4
Patz, Richard J.: 4
Wang, Wen-Chung: 4
Yao, Lihua: 4
van der Linden, Wim J.: 4
Cohen, Allan S.: 3
Douglas, Jeffrey A.: 3
Glas, Cees A. W.: 3
Huang, Hung-Yu: 3
Publication Type
Journal Articles: 96
Reports - Research: 75
Reports - Evaluative: 23
Reports - Descriptive: 10
Dissertations/Theses -…: 8
Speeches/Meeting Papers: 4
Collected Works - Proceedings: 2
Numerical/Quantitative Data: 2
Audience
Researchers: 2
Students: 1
Location
Taiwan: 5
Hong Kong: 2
North Carolina: 2
Singapore: 2
Afghanistan: 1
Australia: 1
Austria: 1
Botswana: 1
Brazil: 1
Canada: 1
Chile: 1
Zhichen Guo; Daxun Wang; Yan Cai; Dongbo Tu – Educational and Psychological Measurement, 2024
Forced-choice (FC) measures, which employ comparative rather than absolute judgments, have been widely used in many personality and attitude tests as an alternative to rating scales. This format can effectively reduce several response biases, such as social desirability, response styles, and acquiescence bias. Another type of data linked with comparative…
Descriptors: Item Response Theory, Models, Reaction Time, Measurement Techniques
Wim J. van der Linden; Luping Niu; Seung W. Choi – Journal of Educational and Behavioral Statistics, 2024
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint…
Descriptors: Adaptive Testing, Test Construction, Test Format, Test Reliability
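A minimal sketch of the within-subtest adaptation step described in this entry, assuming a two-parameter logistic (2PL) response model and maximum-information item selection; the function names and parameter values are illustrative, not the authors' implementation. The between-subtest level in the article adds a joint second-level model over the subtests, which this sketch does not attempt to reproduce.
```python
import numpy as np

def information_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Pick the unadministered item with maximum information at the current ability estimate."""
    info = information_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf        # exclude items already given
    return int(np.argmax(info))

# illustrative five-item subtest
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])      # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.0, -0.5])    # difficulties
print(select_next_item(theta_hat=0.3, a=a, b=b, administered={2}))
```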
Esther Ulitzsch; Steffi Pohl; Lale Khorramdel; Ulf Kroehne; Matthias von Davier – Journal of Educational and Behavioral Statistics, 2024
Questionnaires are by far the most common tool for measuring noncognitive constructs in psychology and educational sciences. Response bias may pose an additional source of variation between respondents that threatens the validity of conclusions drawn from questionnaire data. We present a mixture modeling approach that leverages response time data from…
Descriptors: Item Response Theory, Response Style (Tests), Questionnaires, Secondary School Students
Yu, Albert; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2023
We propose a new item response theory growth model with item-specific learning parameters, or ISLP, and two variations of this model. In the ISLP model, either items or blocks of items have their own learning parameters. This model may be used to improve the efficiency of learning in a formative assessment. We show ways that the ISLP model's…
Descriptors: Item Response Theory, Learning, Markov Processes, Monte Carlo Methods
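One plausible reading of "item-specific learning parameters" is that an item's success probability rises with the number of prior practice opportunities at an item-specific rate. The sketch below encodes that idea in a Rasch-type form; the parameterization is hypothetical, not the authors' ISLP specification.
```python
import numpy as np

def p_correct(theta, b, gamma, t):
    """Rasch-type success probability after t prior practice opportunities,
    with an item-specific learning rate gamma (hypothetical parameterization)."""
    return 1.0 / (1.0 + np.exp(-(theta - b + gamma * t)))

theta, b, gamma = 0.0, 0.5, 0.3
for t in range(4):
    print(t, round(p_correct(theta, b, gamma, t), 3))
```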
Fox, Jean-Paul; Wenzel, Jeremias; Klotzke, Konrad – Journal of Educational and Behavioral Statistics, 2021
Standard item response theory (IRT) models have been extended with testlet effects to account for the nesting of items; these are well known as (Bayesian) testlet models or random effect models for testlets. The testlet modeling framework has several disadvantages. A sufficient number of testlet items are needed to estimate testlet effects, and a…
Descriptors: Bayesian Statistics, Tests, Item Response Theory, Hierarchical Linear Modeling
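For reference, a minimal sketch of the standard random-effect (testlet) formulation the abstract refers to, in which all items of a testlet share a person-by-testlet effect; the parameter values are illustrative.
```python
import numpy as np

rng = np.random.default_rng(1)

def p_testlet_2pl(theta, a, b, gamma):
    """2PL testlet model: gamma is the person-by-testlet random effect
    shared by all items in the same testlet."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b - gamma)))

# one person answering one testlet of four items
theta = 0.4
gamma = rng.normal(0.0, 0.6)               # draw from the testlet-effect distribution
a = np.array([1.0, 1.3, 0.9, 1.1])
b = np.array([-0.5, 0.0, 0.4, 1.0])
responses = rng.binomial(1, p_testlet_2pl(theta, a, b, gamma))
print(responses)
```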
Jang, Yoonsun; Cohen, Allan S. – Educational and Psychological Measurement, 2020
A nonconverged Markov chain can potentially lead to invalid inferences about model parameters. The purpose of this study was to assess the effect of a nonconverged Markov chain on the estimation of parameters for mixture item response theory models using a Markov chain Monte Carlo algorithm. A simulation study was conducted to investigate the…
Descriptors: Markov Processes, Item Response Theory, Accuracy, Inferences
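A minimal sketch of one common convergence check for such chains, the potential scale reduction factor (R-hat); this is a generic diagnostic, not necessarily the one used in the study, and the chains below are simulated for illustration.
```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for one parameter.
    `chains` has shape (n_chains, n_draws); values near 1 suggest convergence."""
    n = chains.shape[1]
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    var_hat = (n - 1) / n * W + B / n
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(0)
good = rng.normal(0, 1, size=(4, 1000))        # well-mixed chains
bad = good + np.arange(4)[:, None]             # chains stuck at different levels
print(round(gelman_rubin(good), 3), round(gelman_rubin(bad), 3))
```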
Ames, Allison J.; Myers, Aaron J. – Educational and Psychological Measurement, 2021
Contamination of responses due to extreme and midpoint response style can confound the interpretation of scores, threatening the validity of inferences made from survey responses. This study incorporated person-level covariates in the multidimensional item response tree model to explain heterogeneity in response style. We include an empirical…
Descriptors: Response Style (Tests), Item Response Theory, Longitudinal Studies, Adolescents
van der Linden, Wim J.; Ren, Hao – Journal of Educational and Behavioral Statistics, 2020
The Bayesian way of accounting for the effects of error in the ability and item parameters in adaptive testing is through the joint posterior distribution of all parameters. An optimized Markov chain Monte Carlo algorithm for adaptive testing is presented, which samples this distribution in real time to score the examinee's ability and optimally…
Descriptors: Bayesian Statistics, Adaptive Testing, Error of Measurement, Markov Processes
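A minimal sketch of the idea of scoring ability by sampling its posterior with MCMC, assuming known item parameters, a 2PL likelihood, a standard-normal prior, and a plain random-walk Metropolis step; the article's optimized real-time algorithm also accounts for error in the item parameters, which this sketch does not.
```python
import numpy as np

rng = np.random.default_rng(42)

def log_post(theta, resp, a, b):
    """Log posterior of ability under a 2PL likelihood and a N(0,1) prior."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p)) - 0.5 * theta**2

def sample_theta(resp, a, b, n_draws=2000, step=0.5):
    """Random-walk Metropolis sampler for the ability parameter."""
    draws, theta = np.empty(n_draws), 0.0
    for i in range(n_draws):
        prop = theta + rng.normal(0.0, step)
        if np.log(rng.uniform()) < log_post(prop, resp, a, b) - log_post(theta, resp, a, b):
            theta = prop
        draws[i] = theta
    return draws

a = np.array([1.0, 1.2, 0.8, 1.5])
b = np.array([-0.5, 0.0, 0.5, 1.0])
resp = np.array([1, 1, 0, 0])
draws = sample_theta(resp, a, b)
print(round(draws[500:].mean(), 2), round(draws[500:].std(), 2))  # posterior mean and SD of ability
```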
Li, Xiao; Xu, Hanchen; Zhang, Jinming; Chang, Hua-hua – Journal of Educational and Behavioral Statistics, 2023
The adaptive learning problem concerns how to create an individualized learning plan (also referred to as a learning policy) that chooses the most appropriate learning materials based on a learner's latent traits. In this article, we study an important yet less-addressed adaptive learning problem--one that assumes continuous latent traits.…
Descriptors: Learning Processes, Models, Algorithms, Individualized Instruction
Liu, Yang; Wang, Xiaojing – Journal of Educational and Behavioral Statistics, 2020
Parametric methods, such as autoregressive models or latent growth modeling, usually lack the flexibility to model dependence and nonlinear effects among changes in latent traits when time gaps are irregular and the recorded time points vary across individuals. Often in practice, the growth trend of latent traits is subject to certain…
Descriptors: Bayesian Statistics, Nonparametric Statistics, Regression (Statistics), Item Response Theory
Levy, Roy – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Roy Levy describes Bayesian approaches to psychometric modeling. He discusses how Bayesian inference is a mechanism for reasoning in a probability-modeling framework and is well-suited to core problems in educational measurement: reasoning from student performances on an assessment to make inferences about their…
Descriptors: Bayesian Statistics, Psychometrics, Item Response Theory, Statistical Inference
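As a concrete illustration of the kind of reasoning the module covers, the sketch below computes the posterior over ability on a grid for a Rasch model given three observed responses; the item difficulties and responses are made up for illustration.
```python
import numpy as np

# Grid approximation of the posterior over ability theta under a Rasch model.
theta_grid = np.linspace(-4, 4, 401)
prior = np.exp(-0.5 * theta_grid**2)               # standard-normal prior (unnormalized)
b = np.array([-1.0, 0.0, 1.0])                     # illustrative item difficulties
resp = np.array([1, 1, 0])                         # observed responses
p = 1.0 / (1.0 + np.exp(-(theta_grid[:, None] - b)))
lik = np.prod(p**resp * (1 - p)**(1 - resp), axis=1)
post = prior * lik
post /= post.sum()                                 # normalize over the grid
print(round(float((theta_grid * post).sum()), 2))  # posterior mean ability
```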
Babcock, Ben; Hodge, Kari J. – Educational and Psychological Measurement, 2020
Equating and scaling in the context of small sample exams, such as credentialing exams for highly specialized professions, has received increased attention in recent research. Investigators have proposed a variety of both classical and Rasch-based approaches to the problem. This study attempts to extend past research by (1) directly comparing…
Descriptors: Item Response Theory, Equated Scores, Scaling, Sample Size
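For orientation, a minimal sketch of one classical linear equating function of the kind compared in such studies, which places a Form X score on the Form Y scale by matching means and standard deviations; the summary statistics are made up.
```python
def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Classical linear equating: map a Form X score to the Form Y scale."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

# illustrative small-sample summary statistics for two forms
print(round(linear_equate(x=32, mean_x=30.0, sd_x=5.0, mean_y=28.0, sd_y=6.0), 2))
```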
da Silva, Marcelo A.; Liu, Ren; Huggins-Manley, Anne C.; Bazán, Jorge L. – Educational and Psychological Measurement, 2019
Multidimensional item response theory (MIRT) models use data from individual item responses to estimate multiple latent traits of interest, making them useful in educational and psychological measurement, among other areas. When MIRT models are applied in practice, it is not uncommon to see that some items are designed to measure all latent traits…
Descriptors: Item Response Theory, Matrices, Models, Bayesian Statistics
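A minimal sketch of a compensatory MIRT item response function, where the nonzero entries of an item's loading vector mark which latent traits it measures; the loadings and trait values are illustrative, not drawn from the article.
```python
import numpy as np

def p_mirt(theta, a, d):
    """Compensatory multidimensional 2PL: P(correct) = logistic(a . theta + d)."""
    return 1.0 / (1.0 + np.exp(-(theta @ a + d)))

theta = np.array([0.5, -0.2])        # two latent traits for one examinee
a_both = np.array([1.2, 0.8])        # item measuring both traits
a_first = np.array([1.2, 0.0])       # item measuring the first trait only
print(round(p_mirt(theta, a_both, -0.1), 3), round(p_mirt(theta, a_first, -0.1), 3))
```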
Luo, Yong; Liang, Xinya – Measurement: Interdisciplinary Research and Perspectives, 2019
Current methods that simultaneously model differential testlet functioning (DTLF) and differential item functioning (DIF) constrain the variances of latent ability and testlet effects to be equal between the focal and the reference groups. Such a constraint can be stringent and unrealistic with real data. In this study, we propose a multigroup…
Descriptors: Test Items, Item Response Theory, Test Bias, Models
Qiao, Xin; Jiao, Hong – Journal of Educational Measurement, 2021
This study proposes an explanatory cognitive diagnostic model (CDM) that jointly incorporates responses and response times (RTs), with item covariates related to both item responses and RTs. The joint modeling of item responses and RTs is intended to provide more information for cognitive diagnosis, while item covariates can be used to predict…
Descriptors: Cognitive Measurement, Models, Reaction Time, Test Items
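A minimal sketch of a joint response-and-RT contribution for one item, pairing a DINA response probability with a lognormal RT density; the explanatory item-covariate structure of the proposed model is not shown, and all values are illustrative.
```python
import numpy as np

def p_dina(alpha, q, guess, slip):
    """DINA success probability: eta = 1 iff the examinee masters every attribute the item requires."""
    eta = int(np.all(alpha[q == 1] == 1))
    return (1 - slip) ** eta * guess ** (1 - eta)

def rt_logdensity(log_t, tau, beta, sigma):
    """Lognormal response-time model: log T ~ N(beta - tau, sigma^2)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (log_t - (beta - tau)) ** 2 / (2 * sigma**2)

alpha = np.array([1, 0, 1])          # attribute-mastery profile
q = np.array([1, 0, 1])              # Q-matrix row: attributes the item requires
p = p_dina(alpha, q, guess=0.2, slip=0.1)
joint_ll = np.log(p) + rt_logdensity(np.log(45.0), tau=0.3, beta=4.0, sigma=0.5)
print(round(p, 3), round(float(joint_ll), 3))
```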