Publication Date
In 2025: 12
Since 2024: 40
Since 2021 (last 5 years): 94
Since 2016 (last 10 years): 160
Since 2006 (last 20 years): 250
Descriptor
Error of Measurement: 308
Factor Analysis: 308
Foreign Countries: 87
Factor Structure: 76
Correlation: 74
Goodness of Fit: 70
Psychometrics: 59
Statistical Analysis: 55
Scores: 54
Comparative Analysis: 50
Structural Equation Models: 47
Audience
Researchers: 7
Location
Germany: 8
Turkey: 8
United States: 7
Canada: 6
South Korea: 6
Netherlands: 5
China: 4
Iran: 4
Portugal: 4
Finland: 3
Singapore: 3
Laws, Policies, & Programs
No Child Left Behind Act 2001: 1
Race to the Top: 1
Dexin Shi; Bo Zhang; Ren Liu; Zhehan Jiang – Educational and Psychological Measurement, 2024
Multiple imputation (MI) is one of the recommended techniques for handling missing data in ordinal factor analysis models. However, methods for computing MI-based fit indices under ordinal factor analysis models have yet to be developed. In this short note, we introduce methods for using the standardized root mean squared residual (SRMR) and…
Descriptors: Goodness of Fit, Factor Analysis, Simulation, Accuracy
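As a point of reference for the SRMR index named in this abstract, the following is a minimal numeric sketch of the usual sample SRMR formula, the root mean of squared residuals between observed and model-implied correlation matrices. The matrices are invented for illustration; pooling such values across imputed data sets is the question the note addresses, not what this sketch does.

    import numpy as np

    def srmr(observed_corr, implied_corr):
        # root mean squared residual over the unique elements (lower triangle plus diagonal)
        p = observed_corr.shape[0]
        resid = observed_corr - implied_corr
        tri = np.tril_indices(p)
        return float(np.sqrt(np.mean(resid[tri] ** 2)))

    S = np.array([[1.00, 0.42, 0.35],      # invented observed correlations
                  [0.42, 1.00, 0.48],
                  [0.35, 0.48, 1.00]])
    Sigma = np.array([[1.00, 0.40, 0.38],  # invented model-implied correlations
                      [0.40, 1.00, 0.45],
                      [0.38, 0.45, 1.00]])
    print(round(srmr(S, Sigma), 4))        # about 0.019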
William C. M. Belzak; Daniel J. Bauer – Journal of Educational and Behavioral Statistics, 2024
Testing for differential item functioning (DIF) has undergone rapid statistical developments recently. Moderated nonlinear factor analysis (MNLFA) allows for simultaneous testing of DIF among multiple categorical and continuous covariates (e.g., sex, age, ethnicity, etc.), and regularization has shown promising results for identifying DIF among…
Descriptors: Test Bias, Algorithms, Factor Analysis, Error of Measurement
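The sketch below is not the moderated nonlinear factor analysis estimator discussed in the article; it is a deliberately simplified analog that conveys the idea of regularized DIF detection: a lasso-penalized logistic regression of one dichotomous item on a rest score, a group indicator, and their interaction, with DIF flagged when the penalized group terms remain nonzero. All data are simulated.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, n_items = 500, 10
    theta = rng.normal(size=n)                   # latent trait
    group = rng.integers(0, 2, size=n)           # reference (0) vs. focal (1) group
    probs = 1 / (1 + np.exp(-(theta[:, None] - 0.2)))
    items = (rng.random((n, n_items)) < probs).astype(int)
    # inject uniform DIF into item 0: the focal group finds it easier
    items[:, 0] = (rng.random(n) < 1 / (1 + np.exp(-(theta - 0.2 + 0.8 * group)))).astype(int)

    rest = items[:, 1:].sum(axis=1)              # rest score as a crude trait proxy
    X = np.column_stack([rest, group, rest * group])
    fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, items[:, 0])
    print("penalized coefficients (rest, group, rest x group):", fit.coef_.round(2))

Whether the group and interaction coefficients survive the penalty is the simplified DIF signal here; MNLFA instead lets the measurement model parameters themselves vary as functions of the covariates.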
Timothy R. Konold; Elizabeth A. Sanders; Kelvin Afolabi – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Measurement invariance (MI) is an essential part of validity evidence concerned with ensuring that tests function similarly across groups, contexts, and time. Most evaluations of MI involve multigroup confirmatory factor analyses (MGCFA) that assume simple structure. However, recent research has shown that constraining non-target indicators to…
Descriptors: Evaluation Methods, Error of Measurement, Validity, Monte Carlo Methods
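For readers unfamiliar with the mechanics of MGCFA invariance evaluation, the following is a minimal sketch of the nested model comparison that typically underlies it, a chi-square difference test between a configural and a metric (equal-loadings) model. The fit statistics are invented, and the sketch does not reproduce the non-simple-structure conditions studied in the article.

    from scipy.stats import chi2

    # invented fit statistics for a configural and a metric model
    chi2_configural, df_configural = 182.4, 118
    chi2_metric, df_metric = 201.9, 126

    d_chi2 = chi2_metric - chi2_configural
    d_df = df_metric - df_configural
    p_value = chi2.sf(d_chi2, d_df)             # chi-square difference test
    print(round(d_chi2, 1), d_df, round(p_value, 4))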
Chunhua Cao; Xinya Liang – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Cross-loadings are common in multiple-factor confirmatory factor analysis (CFA) but often ignored in measurement invariance testing. This study examined the impact of ignoring cross-loadings on the sensitivity of fit measures (CFI, RMSEA, SRMR, SRMRu, AIC, BIC, SaBIC, LRT) to measurement noninvariance. The manipulated design factors included the…
Descriptors: Goodness of Fit, Error of Measurement, Sample Size, Factor Analysis
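Two of the fit measures examined above can be computed directly from model and baseline chi-square statistics. Below is a minimal sketch with invented values, assuming the common (N - 1) convention for RMSEA; it is not tied to the simulation design of the article.

    import math

    def cfi(chi2_m, df_m, chi2_b, df_b):
        num = max(chi2_m - df_m, 0.0)
        den = max(chi2_m - df_m, chi2_b - df_b, 0.0)
        return 1.0 - num / den if den > 0 else 1.0

    def rmsea(chi2_m, df_m, n):
        # uses the (n - 1) convention; some software uses n instead
        return math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

    print(round(cfi(chi2_m=180.0, df_m=120, chi2_b=2400.0, df_b=153), 3))   # about 0.973
    print(round(rmsea(chi2_m=180.0, df_m=120, n=500), 3))                   # about 0.032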
Tenko Raykov – Structural Equation Modeling: A Multidisciplinary Journal, 2024
This note demonstrates that measurement invariance does not guarantee meaningful and valid group comparisons in multiple-population settings. The article follows on a recent critical discussion by Robitzsch and Lüdtke, who argued that measurement invariance was not a prerequisite for such comparisons. Within the framework of common factor…
Descriptors: Error of Measurement, Prerequisites, Factor Analysis, Evaluation Methods
David Goretzko; Karik Siemund; Philipp Sterner – Educational and Psychological Measurement, 2024
Confirmatory factor analyses (CFA) are often used in psychological research when developing measurement models for psychological constructs. Evaluating CFA model fit can be quite challenging, as tests for exact model fit may focus on negligible deviances, while fit indices cannot be interpreted absolutely without specifying thresholds or cutoffs.…
Descriptors: Factor Analysis, Goodness of Fit, Psychological Studies, Measurement
Philipp Sterner; Kim De Roover; David Goretzko – Structural Equation Modeling: A Multidisciplinary Journal, 2025
When comparing relations and means of latent variables, it is important to establish measurement invariance (MI). Most methods to assess MI are based on confirmatory factor analysis (CFA). Recently, new methods have been developed based on exploratory factor analysis (EFA); most notably, as extensions of multi-group EFA, researchers introduced…
Descriptors: Error of Measurement, Measurement Techniques, Factor Analysis, Structural Equation Models
Stephanie M. Bell; R. Philip Chalmers; David B. Flora – Educational and Psychological Measurement, 2024
Coefficient omega indices are model-based composite reliability estimates that have become increasingly popular. A coefficient omega index estimates how reliably an observed composite score measures a target construct as represented by a factor in a factor-analysis model; as such, the accuracy of omega estimates is likely to depend on correct…
Descriptors: Influences, Models, Measurement Techniques, Reliability
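A minimal numeric sketch of a coefficient omega estimate for a single-factor composite, assuming a correctly specified unidimensional model with the factor variance fixed at 1 and uncorrelated uniquenesses; the loadings and uniquenesses are invented.

    import numpy as np

    def omega(loadings, uniquenesses):
        # factor variance fixed at 1; unique factors assumed uncorrelated
        lam = np.asarray(loadings, dtype=float)
        theta = np.asarray(uniquenesses, dtype=float)
        true_var = lam.sum() ** 2
        return true_var / (true_var + theta.sum())

    print(round(omega([0.7, 0.6, 0.8, 0.5], [0.51, 0.64, 0.36, 0.75]), 3))   # about 0.75

The abstract's point is that such estimates inherit any misspecification of the factor model; the formula above presupposes that the one-factor model holds.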
Hoang V. Nguyen; Niels G. Waller – Educational and Psychological Measurement, 2024
We conducted an extensive Monte Carlo study of factor-rotation local solutions (LS) in multidimensional, two-parameter logistic (M2PL) item response models. In this study, we simulated more than 19,200 data sets that were drawn from 96 model conditions and performed more than 7.6 million rotations to examine the influence of (a) slope parameter…
Descriptors: Monte Carlo Methods, Item Response Theory, Correlation, Error of Measurement
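The following toy sketch illustrates the general idea of hunting for rotation local solutions by rerunning a rotation from random orthogonal starts and comparing criterion values. It uses a plain varimax criterion on an invented loading matrix, not the rotations or M2PL models examined in the study.

    import numpy as np

    def varimax(L, gamma=1.0, max_iter=100, tol=1e-8):
        # SVD-based varimax; returns rotated loadings and the rotation criterion value
        p, k = L.shape
        R = np.eye(k)
        d = 0.0
        for _ in range(max_iter):
            Lr = L @ R
            u, s, vt = np.linalg.svd(
                L.T @ (Lr ** 3 - (gamma / p) * Lr @ np.diag(np.sum(Lr ** 2, axis=0)))
            )
            R = u @ vt
            d_new = s.sum()
            if d > 0 and d_new / d < 1 + tol:
                break
            d = d_new
        return L @ R, d_new

    rng = np.random.default_rng(1)
    L = rng.uniform(0.3, 0.8, size=(12, 2))           # invented unrotated loadings
    criteria = []
    for _ in range(10):
        Q, _ = np.linalg.qr(rng.normal(size=(2, 2)))  # random orthogonal start
        _, crit = varimax(L @ Q)
        criteria.append(round(crit, 6))
    print(sorted(set(criteria)))   # more than one distinct value would signal local solutions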
Ting Dai; Yang Du; Jennifer Cromley; Tia Fechter; Frank Nelson – Journal of Experimental Education, 2024
Simple matrix sampling planned missing (SMS PD) designs introduce missing data patterns that lead to covariances between variables that are not jointly observed and create difficulties for analyses other than mean and variance estimation. Based on prior research, we adopted a new multigroup confirmatory factor analysis (CFA) approach to handle…
Descriptors: Research Problems, Research Design, Data, Matrices
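A minimal sketch of why simple matrix sampling complicates covariance-based analysis: when each respondent answers only one block of items, item pairs from different blocks are never jointly observed, so their sample covariances cannot be computed directly. The item blocks below are invented.

    import itertools

    items = [f"x{i}" for i in range(1, 7)]
    blocks = {"A": items[0:2], "B": items[2:4], "C": items[4:6]}

    # simple matrix sampling: each respondent answers exactly one block
    observed_pairs = set()
    for block_items in blocks.values():
        observed_pairs.update(itertools.combinations(block_items, 2))

    all_pairs = set(itertools.combinations(items, 2))
    print("never jointly observed:", sorted(all_pairs - observed_pairs))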
Christopher E. Shank – ProQuest LLC, 2024
This dissertation compares the performance of equivalence test (EQT) and null hypothesis test (NHT) procedures for identifying invariant and noninvariant factor loadings under a range of experimental manipulations. EQT is the statistically appropriate approach when the research goal is to find evidence of group similarity rather than group…
Descriptors: Factor Analysis, Goodness of Fit, Intervals, Comparative Analysis
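A hedged sketch of the generic two-one-sided-tests (TOST) logic behind equivalence testing of a loading difference; the estimate, standard error, and equivalence bound are invented, and this is not the specific EQT procedure evaluated in the dissertation.

    from scipy.stats import norm

    def tost_equivalence(diff_hat, se, bound, alpha=0.05):
        # two one-sided tests for H1: -bound < diff < bound
        p_lower = 1 - norm.cdf((diff_hat + bound) / se)   # H0: diff <= -bound
        p_upper = norm.cdf((diff_hat - bound) / se)       # H0: diff >= +bound
        p_tost = max(p_lower, p_upper)
        return p_tost, p_tost < alpha

    p, equivalent = tost_equivalence(diff_hat=0.03, se=0.04, bound=0.15)
    print(round(p, 4), equivalent)   # small p and True: the loadings are declared equivalent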
Abdolvahab Khademi; Craig S. Wells; Maria Elena Oliveri; Ester Villalonga-Olives – SAGE Open, 2023
The most common effect sizes when using a multiple-group confirmatory factor analysis approach to measurement invariance are ΔCFI and ΔTLI, with a cutoff value of 0.01. However, this recommended cutoff value may not be ubiquitously appropriate and may be of limited application for some tests (e.g., measures using dichotomous items or…
Descriptors: Factor Analysis, Factor Structure, Error of Measurement, Test Items
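A minimal sketch of the ΔCFI decision rule referenced above, using invented CFI values and the conventional 0.01 cutoff whose general applicability the article questions.

    def delta_cfi_flag(cfi_less_constrained, cfi_more_constrained, cutoff=0.01):
        # conventional rule: a CFI drop larger than the cutoff flags noninvariance
        drop = round(cfi_less_constrained - cfi_more_constrained, 3)
        return drop, drop > cutoff

    print(delta_cfi_flag(0.962, 0.948))   # (0.014, True) -> loading invariance rejected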
Klauth, Bo – ProQuest LLC, 2023
In conducting confirmatory factor analysis with ordered response items, the literature suggests that when the number of responses is five and item skewness (IS) is approximately normal, researchers can employ maximum likelihood with robust standard errors (MLR). However, MLR can yield biased factor loadings (FL) and FL standard errors (FLSE) when…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Error of Measurement
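A small sketch of how simulation conditions like those described above are commonly constructed: a continuous latent response is cut at thresholds, and shifting the thresholds turns a roughly symmetric five-category item into a strongly skewed one. The loading and thresholds are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    loading = 0.7
    latent = loading * rng.normal(size=10_000) + np.sqrt(1 - loading**2) * rng.normal(size=10_000)

    symmetric = np.digitize(latent, [-1.5, -0.5, 0.5, 1.5])   # five roughly symmetric categories
    skewed = np.digitize(latent, [0.0, 0.8, 1.4, 2.0])        # five categories, mass piled low
    print(np.bincount(symmetric, minlength=5) / 10_000)
    print(np.bincount(skewed, minlength=5) / 10_000)

Fitting MLR to items like the skewed one is the condition under which, per the abstract, factor loadings and their standard errors can be biased.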
R. Noah Padgett – Practical Assessment, Research & Evaluation, 2023
The consistency of psychometric properties across waves of data collection provides valuable evidence that scores can be interpreted consistently. Evidence supporting the consistency of psychometric properties can come from using a longitudinal extension of item factor analysis to account for the lack of independence of observations when evaluating…
Descriptors: Psychometrics, Factor Analysis, Item Analysis, Validity
Tenko Raykov – Educational and Psychological Measurement, 2024
This note is concerned with the benefits that can result from the use of the maximal reliability and optimal linear combination concepts in educational and psychological research. Within the widely used framework of unidimensional multi-component measuring instruments, it is demonstrated that the linear combination of their components that…
Descriptors: Educational Research, Behavioral Science Research, Reliability, Error of Measurement
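A hedged numeric sketch of the optimal linear combination idea for a unidimensional congeneric model, assuming the factor variance is fixed at 1: weights proportional to loading divided by uniqueness, and the resulting maximal reliability (coefficient H). The loadings and uniquenesses are invented.

    import numpy as np

    lam = np.array([0.7, 0.6, 0.8, 0.5])        # invented loadings
    theta = np.array([0.51, 0.64, 0.36, 0.75])  # invented unique variances

    weights = lam / theta                       # optimal weights: loading / uniqueness
    weights = weights / weights.sum()           # rescaled only for readability
    h = np.sum(lam**2 / theta)
    maximal_reliability = h / (1 + h)           # coefficient H
    print(weights.round(3), round(maximal_reliability, 3))   # about 0.784, above omega for the same items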