Search results: 8 journal articles, all from the Journal of Educational and Behavioral Statistics, 2023–2025.
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly because they include a random factor variable (the latent variable). This random factor gives rise to the incidental parameter problem: the number of parameters grows as data from new persons are added. IRT models therefore require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
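The abstract does not state which IRT model is meant; as a reference point, the standard two-parameter logistic (2PL) model illustrates the incidental-parameter issue it describes:

```latex
P(X_{ij} = 1 \mid \theta_i) \;=\; \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}},
\qquad \theta_i \sim N(0, 1),
```

where $a_j$ and $b_j$ are the discrimination and difficulty of item $j$. Each new person $i$ brings a new latent trait $\theta_i$, so the parameter count grows with the sample size, which is exactly the growth the abstract refers to.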
William C. M. Belzak; Daniel J. Bauer – Journal of Educational and Behavioral Statistics, 2024
Testing for differential item functioning (DIF) has undergone rapid statistical development recently. Moderated nonlinear factor analysis (MNLFA) allows simultaneous testing of DIF across multiple categorical and continuous covariates (e.g., sex, age, and ethnicity), and regularization has shown promising results for identifying DIF among…
Descriptors: Test Bias, Algorithms, Factor Analysis, Error of Measurement
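The mechanism by which lasso-type regularization flags items as DIF-free is soft-thresholding: small estimated DIF effects are shrunk exactly to zero. A minimal sketch of that operator (illustrative only; not the authors' algorithm):

```python
def soft_threshold(x, lam):
    """Lasso soft-thresholding operator.

    Shrinks the estimate x toward zero by lam, returning exactly
    zero when |x| <= lam -- the step that zeroes out negligible
    DIF parameters in a regularized MNLFA fit.
    """
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0
```

An item whose estimated DIF effect falls below the penalty `lam` is thus treated as invariant, while larger effects survive (shrunken) and are flagged for inspection.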
Paganin, Sally; Paciorek, Christopher J.; Wehrhahn, Claudia; Rodríguez, Abel; Rabe-Hesketh, Sophia; de Valpine, Perry – Journal of Educational and Behavioral Statistics, 2023
Item response theory (IRT) models typically rely on a normality assumption for subject-specific latent traits, which is often unrealistic in practice. Semiparametric extensions based on Dirichlet process mixtures (DPMs) offer a more flexible representation of the unknown distribution of the latent trait. However, the use of such models in the IRT…
Descriptors: Bayesian Statistics, Item Response Theory, Guidance, Evaluation Methods
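A Dirichlet process mixture of the kind the abstract mentions is usually built via the stick-breaking construction, which is easy to sketch (a truncated, illustrative version, not the paper's implementation):

```python
import random

random.seed(7)


def stick_breaking_weights(alpha, k):
    """Truncated stick-breaking construction of Dirichlet-process
    mixture weights: v_j ~ Beta(1, alpha) and
    w_j = v_j * prod_{i<j}(1 - v_i).

    Smaller alpha concentrates the mass on fewer components,
    letting the latent-trait distribution depart from normality
    only as much as the data demand.
    """
    weights, remaining = [], 1.0
    for _ in range(k):
        v = random.betavariate(1.0, alpha)
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights


w = stick_breaking_weights(alpha=2.0, k=50)
```

With a moderate truncation level such as `k=50`, the leftover stick mass is negligible, so the weights effectively sum to one.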
Cross-Classified Item Response Theory Modeling with an Application to Student Evaluation of Teaching
Sijia Huang; Li Cai – Journal of Educational and Behavioral Statistics, 2024
The cross-classified data structure is ubiquitous in education, psychology, and health outcome sciences. In these areas, assessment instruments that are made up of multiple items are frequently used to measure latent constructs. The presence of both the cross-classified structure and multivariate categorical outcomes leads to the so-called…
Descriptors: Classification, Data Collection, Data Analysis, Item Response Theory
Lee, Daniel Y.; Harring, Jeffrey R. – Journal of Educational and Behavioral Statistics, 2023
A Monte Carlo simulation was performed to compare methods for handling missing data in growth mixture models. The methods considered in the current study were (a) a fully Bayesian approach using a Gibbs sampler, (b) full information maximum likelihood using the expectation-maximization algorithm, (c) multiple imputation, (d) a two-stage multiple…
Descriptors: Monte Carlo Methods, Research Problems, Statistical Inference, Bayesian Statistics
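Of the four methods compared, multiple imputation is the easiest to sketch. The toy below (a crude hot-deck scheme on simulated MCAR data, not the study's simulation design) shows the core loop: impute, estimate, pool:

```python
import random
import statistics

random.seed(42)

# Simulate data, then delete roughly 30% of the values
# completely at random (MCAR).
full = [random.gauss(10.0, 2.0) for _ in range(500)]
observed = [x if random.random() > 0.3 else None for x in full]


def multiple_imputation_mean(data, m=25):
    """Estimate the mean under missing data by multiple imputation.

    Each missing value is filled with a random draw from the
    observed values (a crude hot-deck imputation); the m
    completed-data means are then pooled -- Rubin's rule for a
    point estimate is simply their average.
    """
    obs = [x for x in data if x is not None]
    completed_means = []
    for _ in range(m):
        completed = [x if x is not None else random.choice(obs)
                     for x in data]
        completed_means.append(statistics.mean(completed))
    return statistics.mean(completed_means)


estimate = multiple_imputation_mean(observed)
```

Under MCAR this simple scheme recovers the population mean well; the study's interest is in how such methods compare in the far harder setting of growth mixture models.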
Wim J. van der Linden; Luping Niu; Seung W. Choi – Journal of Educational and Behavioral Statistics, 2024
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint…
Descriptors: Adaptive Testing, Test Construction, Test Format, Test Reliability
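The within-subtest level of adaptation described above is typically greedy maximum-information item selection. A minimal sketch for a 2PL item pool (hypothetical pool and parameter values, for illustration only):

```python
import math


def item_information(theta, a, b):
    """Fisher information of a two-parameter logistic (2PL) item
    at ability theta: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)


def select_next_item(theta, pool, administered):
    """Greedy maximum-information rule: among items not yet
    administered, pick the one most informative at the current
    ability estimate -- the standard within-subtest adaptive step."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *pool[i]))


# Hypothetical pool of (discrimination a, difficulty b) pairs.
pool = [(1.2, -1.5), (1.0, 0.0), (0.8, 1.0)]
```

For an examinee currently estimated at `theta = 0.0`, the rule picks the item whose difficulty sits nearest the ability estimate (index 1 here); the between-subtest level in the paper adds a second model layer on top of this per-subtest selection.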
Wang, Yu; Chiu, Chia-Yi; Köhn, Hans Friedrich – Journal of Educational and Behavioral Statistics, 2023
The multiple-choice (MC) item format is widely used in educational assessments across diverse content domains. MC items purportedly allow richer diagnostic information to be collected, and the effectiveness and economy of administering them may have further contributed to their popularity beyond educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Nonparametric Statistics, Test Format, Educational Assessment
Li, Xiao; Xu, Hanchen; Zhang, Jinming; Chang, Hua-hua – Journal of Educational and Behavioral Statistics, 2023
The adaptive learning problem concerns how to create an individualized learning plan (also referred to as a learning policy) that chooses the most appropriate learning materials based on a learner's latent traits. In this article, we study an important yet less-addressed adaptive learning problem--one that assumes continuous latent traits.…
Descriptors: Learning Processes, Models, Algorithms, Individualized Instruction