Publication Date
In 2025: 0
Since 2024: 5
Since 2021 (last 5 years): 12
Since 2016 (last 10 years): 28
Since 2006 (last 20 years): 92
Descriptor
Factor Analysis: 104
Item Response Theory: 104
Models: 104
Correlation: 26
Test Items: 24
Comparative Analysis: 22
Goodness of Fit: 21
Evaluation Methods: 20
Computation: 18
Simulation: 18
Foreign Countries: 17
Author
Ferrando, Pere J.: 6
Svetina, Dubravka: 4
Levy, Roy: 3
Maydeu-Olivares, Alberto: 3
Bauer, Daniel J.: 2
Cai, Li: 2
Edwards, Michael C.: 2
Finch, Holmes: 2
Seo, Dong Gi: 2
Wang, Wen-Chung: 2
Weiss, David J.: 2
Audience
Researchers: 2
Practitioners: 1
Students: 1
Location
Hong Kong: 2
Arizona: 1
Australia: 1
Austria: 1
Bahrain: 1
Chile: 1
China: 1
Finland: 1
France: 1
Germany: 1
Indonesia: 1
Sooyong Lee; Suhwa Han; Seung W. Choi – Journal of Educational Measurement, 2024
Research has shown that multiple-indicator multiple-cause (MIMIC) models can result in inflated Type I error rates in detecting differential item functioning (DIF) when the assumption of equal latent variance is violated. This study explains how the violation of the equal variance assumption adversely impacts the detection of nonuniform DIF and…
Descriptors: Factor Analysis, Bayesian Statistics, Test Bias, Item Response Theory
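For context, a minimal MIMIC-DIF setup in standard notation (the symbols and the specific parameterization are illustrative, not taken from the article):

```latex
% Measurement model for item j, person i, with grouping covariate g_i:
y^{*}_{ij} = \lambda_j \eta_i + \beta_j g_i + \omega_j (\eta_i g_i) + \varepsilon_{ij}

% Structural model for the latent trait:
\eta_i = \gamma g_i + \zeta_i, \qquad \zeta_i \sim \mathcal{N}(0, \psi)
```

Here \(\beta_j \neq 0\) flags uniform DIF and \(\omega_j \neq 0\) flags nonuniform DIF; the equal latent variance assumption the study examines corresponds to holding \(\psi\) equal across groups.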
Jochen Ranger; Christoph König; Benjamin W. Domingue; Jörg-Tobias Kuhn; Andreas Frey – Journal of Educational and Behavioral Statistics, 2024
In the existing multidimensional extensions of the log-normal response time (LNRT) model, the log response times are decomposed into a linear combination of several latent traits. These models are fully compensatory as low levels on traits can be counterbalanced by high levels on other traits. We propose an alternative multidimensional extension…
Descriptors: Models, Statistical Distributions, Item Response Theory, Response Rates (Questionnaires)
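The unidimensional LNRT model and its fully compensatory multidimensional extension described above can be sketched in standard notation (the authors' proposed alternative extension is not reproduced here):

```latex
% Unidimensional log-normal response time (LNRT) model:
\ln T_{ij} = \beta_j - \tau_i + \varepsilon_{ij}, \qquad
\varepsilon_{ij} \sim \mathcal{N}(0, \sigma_j^2)

% Fully compensatory multidimensional extension:
\ln T_{ij} = \beta_j - \sum_{k=1}^{K} a_{jk}\,\tau_{ik} + \varepsilon_{ij}
```

with \(\beta_j\) the time intensity of item \(j\) and \(\tau_{ik}\) person \(i\)'s latent speed on dimension \(k\); the linear sum is what makes the model compensatory, since a low \(\tau_{ik}\) can be offset by a high value on another dimension.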
Hoang V. Nguyen; Niels G. Waller – Educational and Psychological Measurement, 2024
We conducted an extensive Monte Carlo study of factor-rotation local solutions (LS) in multidimensional, two-parameter logistic (M2PL) item response models. In this study, we simulated more than 19,200 data sets that were drawn from 96 model conditions and performed more than 7.6 million rotations to examine the influence of (a) slope parameter…
Descriptors: Monte Carlo Methods, Item Response Theory, Correlation, Error of Measurement
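A minimal Python sketch of how M2PL response data can be generated for a Monte Carlo study of this kind, assuming the model form P(X = 1 | θ) = 1 / (1 + exp(−(a′θ + d))); function names, parameter values, and sample sizes are illustrative, not the study's 96 conditions:

```python
import numpy as np

def simulate_m2pl(n_persons, slopes, intercepts, rng):
    """Draw dichotomous responses from a multidimensional
    two-parameter logistic (M2PL) model:
    P(X_ij = 1 | theta_i) = 1 / (1 + exp(-(a_j' theta_i + d_j)))."""
    n_dims = slopes.shape[1]
    theta = rng.standard_normal((n_persons, n_dims))   # uncorrelated traits
    logits = theta @ slopes.T + intercepts             # persons x items
    prob = 1.0 / (1.0 + np.exp(-logits))
    responses = (rng.random(prob.shape) < prob).astype(int)
    return responses, theta

rng = np.random.default_rng(42)
# Four items loading on two dimensions (rows: items, cols: dimensions).
slopes = np.array([[1.2, 0.0], [0.9, 0.3], [0.0, 1.1], [0.2, 1.0]])
intercepts = np.array([0.0, -0.5, 0.5, 0.0])
responses, theta = simulate_m2pl(2000, slopes, intercepts, rng)
print(responses.shape)  # (2000, 4)
```

A rotation study would then fit an exploratory M2PL to many such data sets from varied slope configurations and compare the local solutions reached from different starting rotations.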
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
Guo, Wenjing; Choi, Youn-Jeng – Educational and Psychological Measurement, 2023
Determining the number of dimensions is extremely important in applying item response theory (IRT) models to data. Traditional and revised parallel analyses have been proposed within the factor analysis framework, and both have shown some promise in assessing dimensionality. However, their performance in the IRT framework has not been…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Guidelines
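A minimal sketch of traditional (Horn's) parallel analysis, the baseline procedure the study evaluates: observed eigenvalues are retained while they exceed a percentile threshold from random data of the same shape. The function name, simulation settings, and toy two-factor data are illustrative:

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: count eigenvalues of the observed
    correlation matrix that exceed the 95th percentile of eigenvalues
    from uncorrelated random data of the same n x p shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        rand = rng.standard_normal((n, p))
        sim_eigs[s] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = np.percentile(sim_eigs, 95, axis=0)
    return int(np.sum(obs_eigs > threshold))

# Toy check: strongly two-factor continuous data should retain 2 dimensions.
rng = np.random.default_rng(1)
factors = rng.standard_normal((500, 2))
loadings = np.array([[0.8, 0.0], [0.8, 0.0], [0.8, 0.0],
                     [0.0, 0.8], [0.0, 0.8], [0.0, 0.8]])
x = factors @ loadings.T + 0.4 * rng.standard_normal((500, 6))
print(parallel_analysis(x))  # 2
```

The study's question is precisely how well this factor-analytic procedure (and its revised variant) transfers to dichotomous data generated under IRT models, where tetrachoric rather than Pearson correlations may be needed.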
Ferrando, Pere J.; Navarro-González, David – Educational and Psychological Measurement, 2021
Item response theory "dual" models (DMs) in which both items and individuals are viewed as sources of differential measurement error so far have been proposed only for unidimensional measures. This article proposes two multidimensional extensions of existing DMs: the M-DTCRM (dual Thurstonian continuous response model), intended for…
Descriptors: Item Response Theory, Error of Measurement, Models, Factor Analysis
Kim, Kyung Yong – Journal of Educational Measurement, 2020
New items are often evaluated prior to their operational use to obtain item response theory (IRT) item parameter estimates for quality control purposes. Fixed parameter calibration is one linking method that is widely used to estimate parameters for new items and place them on the desired scale. This article provides detailed descriptions of two…
Descriptors: Item Response Theory, Evaluation Methods, Test Items, Simulation
Pere J. Ferrando; Fabia Morales-Vives; Ana Hernández-Dorado – Educational and Psychological Measurement, 2024
In recent years, some models for binary and graded format responses have been proposed to assess unipolar variables or "quasi-traits." These studies have mainly focused on clinical variables that have traditionally been treated as bipolar traits. In the present study, we have made a proposal for unipolar traits measured with continuous…
Descriptors: Item Analysis, Goodness of Fit, Accuracy, Test Validity
Dakota W. Cintron – ProQuest LLC, 2020
Observable data in empirical social and behavioral science studies are often categorical (i.e., binary, ordinal, or nominal). When categorical data are outcomes, they fail to maintain the scale and distributional properties of linear regression and factor analysis. Attempting to estimate model parameters for categorical outcome data with the…
Descriptors: Factor Analysis, Computation, Statistics, Methods
Sarsa, Sami; Leinonen, Juho; Hellas, Arto – Journal of Educational Data Mining, 2022
New knowledge tracing models are continuously being proposed, even at a pace where state-of-the-art models cannot be compared with each other at the time of publication. This leads to a situation where ranking models is hard, and the underlying reasons for the models' performance -- be it architectural choices, hyperparameter tuning, performance…
Descriptors: Learning Processes, Artificial Intelligence, Intelligent Tutoring Systems, Memory
Chung, Seungwon; Houts, Carrie – Measurement: Interdisciplinary Research and Perspectives, 2020
Advanced modeling of item response data through the item response theory (IRT) or item factor analysis frameworks is becoming increasingly popular. In the social and behavioral sciences, the underlying structure of tests/assessments is often multidimensional (i.e., more than 1 latent variable/construct is represented in the items). This review…
Descriptors: Item Response Theory, Evaluation Methods, Models, Factor Analysis
Baris Pekmezci, Fulya; Gulleroglu, H. Deniz – Eurasian Journal of Educational Research, 2019
Purpose: This study aims to investigate the orthogonality assumption, which restricts the use of Bifactor item response theory under different conditions. Method: Data of the study have been obtained in accordance with the Bifactor model. It has been produced in accordance with two different models (Model 1 and Model 2) in a simulated way.…
Descriptors: Item Response Theory, Accuracy, Item Analysis, Correlation
Fager, Meghan L. – ProQuest LLC, 2019
Recent research in multidimensional item response theory has introduced within-item interaction effects between latent dimensions in the prediction of item responses. The objective of this study was to extend this research to bifactor models to include an interaction effect between the general and specific latent variables measured by an item.…
Descriptors: Test Items, Item Response Theory, Factor Analysis, Simulation
Cao, Mengyang; Song, Q. Chelsea; Tay, Louis – International Journal of Testing, 2018
There is a growing use of noncognitive assessments around the world, and recent research has posited an ideal point response process underlying such measures. A critical issue is whether the typical use of dominance approaches (e.g., average scores, factor analysis, and the Samejima's graded response model) in scoring such measures is adequate.…
Descriptors: Comparative Analysis, Item Response Theory, Factor Analysis, Models
Eaton, Philip; Willoughby, Shannon – Physical Review Physics Education Research, 2020
As targeted, single-conception curriculum research becomes more prevalent in physics education research (PER), the need for a more sophisticated statistical understanding of the conceptual surveys used becomes apparent. Previously, the factor structure of the Force Concept Inventory (FCI) was examined using exploratory factor analysis (EFA) and…
Descriptors: Item Response Theory, Factor Analysis, Factor Structure, Models