Publication Date
  In 2025 (2)
  Since 2024 (11)
Descriptor
  Goodness of Fit (11)
  Sample Size (11)
  Error of Measurement (9)
  Factor Analysis (7)
  Accuracy (4)
  Structural Equation Models (4)
  Classification (3)
  Factor Structure (3)
  Item Response Theory (3)
  Models (3)
  Monte Carlo Methods (3)
Source
  Educational and Psychological Measurement (5)
  Journal of Experimental Education (2)
  Structural Equation Modeling: A Multidisciplinary Journal (2)
  International Journal of Assessment Tools in Education (1)
  ProQuest LLC (1)
Author
  Chunhua Cao (2)
  Allan S. Cohen (1)
  Benjamin Lugu (1)
  Bo Zhang (1)
  Christopher E. Shank (1)
  David Goretzko (1)
  Dexin Shi (1)
  Dubravka Svetina Valdivia (1)
  Fatih Orçan (1)
  Fei Gu (1)
  Frank Nelson (1)
Publication Type
  Journal Articles (10)
  Reports - Research (10)
  Dissertations/Theses -… (1)
  Information Analyses (1)
Dexin Shi; Bo Zhang; Ren Liu; Zhehan Jiang – Educational and Psychological Measurement, 2024
Multiple imputation (MI) is one of the recommended techniques for handling missing data in ordinal factor analysis models. However, methods for computing MI-based fit indices under ordinal factor analysis models have yet to be developed. In this short note, we introduce methods for using the standardized root mean squared residual (SRMR) and…
Descriptors: Goodness of Fit, Factor Analysis, Simulation, Accuracy
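To make the technique described above concrete: SRMR contrasts an observed correlation matrix with the model-implied one, and under multiple imputation one plausible route is to compute the index in each imputed dataset and pool the values. A minimal Python sketch (numpy only); the simple averaging rule and the function names are assumptions, not the authors' procedure.

import numpy as np

def srmr(sample_corr, implied_corr):
    # SRMR over the off-diagonal, lower-triangle residuals of two correlation matrices
    p = sample_corr.shape[0]
    idx = np.tril_indices(p, k=-1)
    resid = sample_corr[idx] - implied_corr[idx]
    return np.sqrt(np.mean(resid ** 2))

def pooled_srmr(sample_corrs, implied_corrs):
    # One plausible pooling rule (assumption): average the per-imputation SRMR values
    return float(np.mean([srmr(s, m) for s, m in zip(sample_corrs, implied_corrs)]))

# Toy usage with two "imputations" of a three-variable model
obs = [np.array([[1.0, .52, .40], [.52, 1.0, .35], [.40, .35, 1.0]]),
       np.array([[1.0, .49, .43], [.49, 1.0, .31], [.43, .31, 1.0]])]
imp = [np.array([[1.0, .50, .42], [.50, 1.0, .34], [.42, .34, 1.0]])] * 2
print(pooled_srmr(obs, imp))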
Dubravka Svetina Valdivia; Shenghai Dai – Journal of Experimental Education, 2024
Applications of polytomous IRT models in applied fields (e.g., health, education, psychology) abound. However, little is known about the impact of the number of categories and sample size requirements for precise parameter recovery. In a simulation study, we investigated the impact of the number of response categories and required sample size…
Descriptors: Item Response Theory, Sample Size, Models, Classification
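The kind of simulation described above starts from a polytomous data-generating model with a chosen number of response categories and sample size. A minimal sketch, assuming a graded response model as the generator; the study's actual design factors and generating model are not given in the snippet.

import numpy as np

rng = np.random.default_rng(1)

def simulate_grm(n_persons, n_items, n_categories, rng):
    # Graded response model: P(X >= k | theta) = logistic(a * (theta - b_k))
    theta = rng.normal(size=n_persons)                     # latent traits
    a = rng.uniform(1.0, 2.5, size=n_items)                # discriminations
    b = np.sort(rng.normal(size=(n_items, n_categories - 1)), axis=1)  # ordered thresholds
    z = a[None, :, None] * (theta[:, None, None] - b[None, :, :])
    p_ge = 1 / (1 + np.exp(-z))                            # cumulative category probabilities
    cum = np.concatenate([np.ones_like(p_ge[..., :1]), p_ge,
                          np.zeros_like(p_ge[..., :1])], axis=-1)
    probs = cum[..., :-1] - cum[..., 1:]                   # per-category probabilities
    u = rng.random((n_persons, n_items, 1))                # inverse-CDF sampling
    return (u > np.cumsum(probs, axis=-1)).sum(axis=-1)

# One simulated condition: 500 respondents, 10 items, 5 response categories
data = simulate_grm(n_persons=500, n_items=10, n_categories=5, rng=rng)
print(data.shape, data.min(), data.max())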
Suppanut Sriutaisuk; Yu Liu; Seungwon Chung; Hanjoe Kim; Fei Gu – Educational and Psychological Measurement, 2025
The multiple imputation two-stage (MI2S) approach holds promise for evaluating the model fit of structural equation models for ordinal variables with multiply imputed data. However, previous studies only examined the performance of MI2S-based residual-based test statistics. This study extends previous research by examining the performance of two…
Descriptors: Structural Equation Models, Error of Measurement, Programming Languages, Goodness of Fit
Chunhua Cao; Xinya Liang – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Cross-loadings are common in multiple-factor confirmatory factor analysis (CFA) but often ignored in measurement invariance testing. This study examined the impact of ignoring cross-loadings on the sensitivity of fit measures (CFI, RMSEA, SRMR, SRMRu, AIC, BIC, SaBIC, LRT) to measurement noninvariance. The manipulated design factors included the…
Descriptors: Goodness of Fit, Error of Measurement, Sample Size, Factor Analysis
David Goretzko; Karik Siemund; Philipp Sterner – Educational and Psychological Measurement, 2024
Confirmatory factor analyses (CFA) are often used in psychological research when developing measurement models for psychological constructs. Evaluating CFA model fit can be quite challenging, as tests for exact model fit may focus on negligible deviances, while fit indices cannot be interpreted absolutely without specifying thresholds or cutoffs.…
Descriptors: Factor Analysis, Goodness of Fit, Psychological Studies, Measurement
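The contrast drawn above, between an exact-fit test and cutoff-based fit indices, can be seen from the standard formulas. The sketch below computes the chi-square p-value alongside RMSEA and CFI from summary statistics; the example values, the conventional cutoffs mentioned in the comments, and the N-1 convention in RMSEA are illustrative assumptions.

import numpy as np
from scipy.stats import chi2 as chi2_dist

def fit_summary(chi2, df, chi2_base, df_base, n):
    # Exact-fit test p-value vs. two common approximate-fit indices, which are
    # typically read against conventional (not absolute) cutoffs such as
    # RMSEA < .06 and CFI > .95.
    p_exact = chi2_dist.sf(chi2, df)
    rmsea = np.sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    cfi = 1 - max(chi2 - df, 0) / max(chi2_base - df_base, chi2 - df, 1e-12)
    return {"p_exact_fit": p_exact, "RMSEA": rmsea, "CFI": cfi}

# With a large sample, a tiny misfit can reject exact fit yet look fine on the indices.
print(fit_summary(chi2=61.0, df=34, chi2_base=900.0, df_base=45, n=2000))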
Ting Dai; Yang Du; Jennifer Cromley; Tia Fechter; Frank Nelson – Journal of Experimental Education, 2024
Simple matrix sampling planned missing (SMS PD) designs introduce missing data patterns that lead to covariances between variables that are not jointly observed and create difficulties for analyses other than mean and variance estimation. Based on prior research, we adopted a new multigroup confirmatory factor analysis (CFA) approach to handle…
Descriptors: Research Problems, Research Design, Data, Matrices
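As a rough illustration of why matrix-sampling planned missingness complicates covariance estimation: if each respondent answers only one block of items, variables from different blocks are never jointly observed. The toy mask below is a generic sketch of that situation, not the authors' SMS PD design.

import numpy as np

rng = np.random.default_rng(7)

def matrix_sampling_mask(n, blocks):
    # Assign each respondent one block of items and mask the rest (True = observed).
    # Variables in different blocks are never jointly observed, so their pairwise
    # covariances cannot be estimated from complete pairs.
    n_vars = sum(len(b) for b in blocks)
    mask = np.zeros((n, n_vars), dtype=bool)
    form = rng.integers(len(blocks), size=n)
    for i, b in enumerate(blocks):
        mask[np.ix_(form == i, b)] = True
    return mask

mask = matrix_sampling_mask(n=300, blocks=[[0, 1, 2], [3, 4, 5], [6, 7, 8]])
# Items 0 and 3 are never observed together under this plan:
print(np.any(mask[:, 0] & mask[:, 3]))   # False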
Christopher E. Shank – ProQuest LLC, 2024
This dissertation compares the performance of equivalence test (EQT) and null hypothesis test (NHT) procedures for identifying invariant and noninvariant factor loadings under a range of experimental manipulations. EQT is the statistically appropriate approach when the research goal is to find evidence of group similarity rather than group…
Descriptors: Factor Analysis, Goodness of Fit, Intervals, Comparative Analysis
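The EQT-versus-NHT distinction above can be illustrated with a single loading difference: a null hypothesis test asks whether the difference is zero, while a TOST-style equivalence test asks whether it lies inside a preset margin. The sketch below uses normal-theory z tests and an arbitrary margin of 0.2, both assumptions rather than the dissertation's procedure.

import numpy as np
from scipy.stats import norm

def nht_vs_eqt(loading_diff, se, margin=0.2, alpha=0.05):
    # NHT: H0 is "loadings equal"; rejection is taken as evidence of noninvariance.
    z = loading_diff / se
    p_nht = 2 * norm.sf(abs(z))
    # TOST-style EQT: H0 is "|difference| >= margin"; rejection supports equivalence.
    z_lower = (loading_diff + margin) / se      # test of diff <= -margin
    z_upper = (loading_diff - margin) / se      # test of diff >= +margin
    p_eqt = max(norm.sf(z_lower), norm.cdf(z_upper))
    return {"NHT p": p_nht, "EQT p": p_eqt}

# A small, imprecisely estimated difference: NHT cannot reject equality,
# yet EQT also cannot declare equivalence (absence of evidence vs. evidence of similarity).
print(nht_vs_eqt(loading_diff=0.05, se=0.12))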
Chunhua Cao; Benjamin Lugu; Jujia Li – Structural Equation Modeling: A Multidisciplinary Journal, 2024
This study examined the false positive (FP) rates and sensitivity of Bayesian fit indices to structural misspecification in Bayesian structural equation modeling. The impact of measurement quality, sample size, model size, the magnitude of the misspecified path effect, and the choice of prior on the performance of the fit indices was also…
Descriptors: Structural Equation Models, Bayesian Statistics, Measurement, Error of Measurement
Hyunjung Lee; Heining Cham – Educational and Psychological Measurement, 2024
Determining the number of factors in exploratory factor analysis (EFA) is crucial because it affects the rest of the analysis and the conclusions of the study. Researchers have developed various methods for deciding the number of factors to retain in EFA, but this remains one of the most difficult decisions in EFA. The purpose of this study is…
Descriptors: Factor Structure, Factor Analysis, Monte Carlo Methods, Goodness of Fit
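One widely used factor-retention method that such comparisons commonly include is Horn's parallel analysis, which retains factors whose observed eigenvalues exceed those from random data. A minimal sketch for illustration only; the snippet does not say which methods the study itself proposes or compares.

import numpy as np

rng = np.random.default_rng(42)

def parallel_analysis(data, n_sims=100, quantile=0.95, rng=rng):
    # Compare observed correlation-matrix eigenvalues with the chosen quantile
    # of eigenvalues from uncorrelated (pure-noise) data of the same size.
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eig = np.empty((n_sims, p))
    for s in range(n_sims):
        noise = rng.normal(size=(n, p))
        sim_eig[s] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    threshold = np.quantile(sim_eig, quantile, axis=0)
    return int(np.sum(obs_eig > threshold))     # number of factors to retain

# Toy two-factor data: six items per factor with loadings of about .7
f = rng.normal(size=(300, 2))
load = np.zeros((12, 2)); load[:6, 0] = 0.7; load[6:, 1] = 0.7
data = f @ load.T + rng.normal(scale=0.7, size=(300, 12))
print(parallel_analysis(data))                  # typically prints 2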
Fatih Orçan – International Journal of Assessment Tools in Education, 2025
Factor analysis is a statistical method to explore the relationships among observed variables and identify latent structures. It is crucial in scale development and validity analysis. Key factors affecting the accuracy of factor analysis results include the type of data, sample size, and the number of response categories. While some studies…
Descriptors: Factor Analysis, Factor Structure, Item Response Theory, Sample Size
Sedat Sen; Allan S. Cohen – Educational and Psychological Measurement, 2024
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's…
Descriptors: Goodness of Fit, Item Response Theory, Sample Size, Classification
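For reference, the first four criteria named above have standard closed forms in terms of the maximized log-likelihood, the number of free parameters k, and the sample size n; whether the study uses these exact variants is an assumption. A small sketch:

import numpy as np

def information_criteria(loglik, k, n):
    # Standard definitions: the candidate model (here, number of latent classes)
    # with the lowest value is preferred.
    aic = -2 * loglik + 2 * k
    return {
        "AIC": aic,
        "AICc": aic + 2 * k * (k + 1) / (n - k - 1),
        "BIC": -2 * loglik + k * np.log(n),
        "CAIC": -2 * loglik + k * (np.log(n) + 1),
    }

# Compare two candidate mixture IRT solutions (hypothetical log-likelihoods and parameter counts):
for classes, ll, k in [(2, -10450.0, 61), (3, -10430.0, 92)]:
    print(classes, information_criteria(ll, k, n=1000))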