Showing 1 to 15 of 231 results
Peer reviewed
Dexin Shi; Bo Zhang; Ren Liu; Zhehan Jiang – Educational and Psychological Measurement, 2024
Multiple imputation (MI) is one of the recommended techniques for handling missing data in ordinal factor analysis models. However, methods for computing MI-based fit indices under ordinal factor analysis models have yet to be developed. In this short note, we introduce methods for using the standardized root mean squared residual (SRMR) and…
Descriptors: Goodness of Fit, Factor Analysis, Simulation, Accuracy
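As an illustration of the quantities involved (a minimal sketch, not the authors' proposed method), SRMR can be computed per imputed data set and pooled by simple averaging; the `srmr` convention and the averaging rule below are illustrative assumptions:

```python
import numpy as np

def srmr(sample_corr, implied_corr):
    """SRMR over the strict lower triangle of the residual correlation matrix
    (conventions differ on whether diagonal elements are included)."""
    idx = np.tril_indices(sample_corr.shape[0], k=-1)
    resid = sample_corr[idx] - implied_corr[idx]
    return float(np.sqrt(np.mean(resid ** 2)))

def pooled_srmr(imputed_sample_corrs, implied_corr):
    """Naive pooling: compute SRMR for each imputed data set's sample
    correlation matrix and average (one of several possible pooling rules)."""
    return float(np.mean([srmr(S, implied_corr) for S in imputed_sample_corrs]))

implied = np.eye(3)  # toy model implying zero correlations
S1 = np.array([[1.0, 0.1, 0.1], [0.1, 1.0, 0.1], [0.1, 0.1, 1.0]])
S2 = np.array([[1.0, 0.2, 0.2], [0.2, 1.0, 0.2], [0.2, 0.2, 1.0]])
print(round(pooled_srmr([S1, S2], implied), 3))  # prints 0.15
```

In practice the implied matrix would come from the ordinal factor model fitted to each imputed data set, which is the step the note addresses.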
Peer reviewed
William C. M. Belzak; Daniel J. Bauer – Journal of Educational and Behavioral Statistics, 2024
Testing for differential item functioning (DIF) has undergone rapid statistical developments recently. Moderated nonlinear factor analysis (MNLFA) allows for simultaneous testing of DIF among multiple categorical and continuous covariates (e.g., sex, age, ethnicity, etc.), and regularization has shown promising results for identifying DIF among…
Descriptors: Test Bias, Algorithms, Factor Analysis, Error of Measurement
Peer reviewed
Timothy R. Konold; Elizabeth A. Sanders; Kelvin Afolabi – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Measurement invariance (MI) is an essential part of validity evidence concerned with ensuring that tests function similarly across groups, contexts, and time. Most evaluations of MI involve multigroup confirmatory factor analyses (MGCFA) that assume simple structure. However, recent research has shown that constraining non-target indicators to…
Descriptors: Evaluation Methods, Error of Measurement, Validity, Monte Carlo Methods
Peer reviewed
Chunhua Cao; Xinya Liang – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Cross-loadings are common in multiple-factor confirmatory factor analysis (CFA) but often ignored in measurement invariance testing. This study examined the impact of ignoring cross-loadings on the sensitivity of fit measures (CFI, RMSEA, SRMR, SRMRu, AIC, BIC, SaBIC, LRT) to measurement noninvariance. The manipulated design factors included the…
Descriptors: Goodness of Fit, Error of Measurement, Sample Size, Factor Analysis
Peer reviewed
Tenko Raykov – Structural Equation Modeling: A Multidisciplinary Journal, 2024
This note demonstrates that measurement invariance does not guarantee meaningful and valid group comparisons in multiple-population settings. The article follows up on a recent critical discussion by Robitzsch and Lüdtke, who argued that measurement invariance was not a prerequisite for such comparisons. Within the framework of common factor…
Descriptors: Error of Measurement, Prerequisites, Factor Analysis, Evaluation Methods
Peer reviewed
David Goretzko; Karik Siemund; Philipp Sterner – Educational and Psychological Measurement, 2024
Confirmatory factor analyses (CFA) are often used in psychological research when developing measurement models for psychological constructs. Evaluating CFA model fit can be quite challenging, as tests for exact model fit may focus on negligible deviances, while fit indices cannot be interpreted absolutely without specifying thresholds or cutoffs.…
Descriptors: Factor Analysis, Goodness of Fit, Psychological Studies, Measurement
Peer reviewed
Stephanie M. Bell; R. Philip Chalmers; David B. Flora – Educational and Psychological Measurement, 2024
Coefficient omega indices are model-based composite reliability estimates that have become increasingly popular. A coefficient omega index estimates how reliably an observed composite score measures a target construct as represented by a factor in a factor-analysis model; as such, the accuracy of omega estimates is likely to depend on correct…
Descriptors: Influences, Models, Measurement Techniques, Reliability
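The coefficient omega referred to above has a standard closed form for a single-factor (congeneric) model; the sketch below uses made-up loadings purely for illustration:

```python
import numpy as np

def coefficient_omega(loadings, residual_vars):
    """McDonald's coefficient omega for a single-factor model:
    omega = (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
    lam = np.asarray(loadings, dtype=float)
    theta = np.asarray(residual_vars, dtype=float)
    common = lam.sum() ** 2  # variance attributable to the factor
    return float(common / (common + theta.sum()))

# Hypothetical standardized loadings for a six-item composite
lam = [0.70, 0.65, 0.72, 0.68, 0.75, 0.70]
theta = [1 - l ** 2 for l in lam]  # residual variances for standardized items
print(round(coefficient_omega(lam, theta), 3))  # prints 0.852
```

Because omega is a direct function of the estimated loadings and residual variances, misspecifying the factor model distorts the estimate, which is the dependence the study examines.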
Peer reviewed
Hoang V. Nguyen; Niels G. Waller – Educational and Psychological Measurement, 2024
We conducted an extensive Monte Carlo study of factor-rotation local solutions (LS) in multidimensional, two-parameter logistic (M2PL) item response models. In this study, we simulated more than 19,200 data sets that were drawn from 96 model conditions and performed more than 7.6 million rotations to examine the influence of (a) slope parameter…
Descriptors: Monte Carlo Methods, Item Response Theory, Correlation, Error of Measurement
Peer reviewed
Ting Dai; Yang Du; Jennifer Cromley; Tia Fechter; Frank Nelson – Journal of Experimental Education, 2024
Simple matrix sampling planned missing (SMS PD) designs introduce missing data patterns that lead to covariances between variables that are not jointly observed, creating difficulties for analyses other than mean and variance estimation. Based on prior research, we adopted a new multigroup confirmatory factor analysis (CFA) approach to handle…
Descriptors: Research Problems, Research Design, Data, Matrices
Peer reviewed
Abdolvahab Khademi; Craig S. Wells; Maria Elena Oliveri; Ester Villalonga-Olives – SAGE Open, 2023
The most common effect size when using a multiple-group confirmatory factor analysis approach to measurement invariance is [delta]CFI and [delta]TLI with a cutoff value of 0.01. However, this recommended cutoff value may not be ubiquitously appropriate and may be of limited application for some tests (e.g., measures using dichotomous items or…
Descriptors: Factor Analysis, Factor Structure, Error of Measurement, Test Items
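The [delta]CFI criterion mentioned above compares CFI across nested invariance models; a minimal sketch with invented chi-square values (not drawn from the study) shows the computation:

```python
def cfi(chi2, df, chi2_base, df_base):
    """Comparative fit index from the model and the baseline
    (independence-model) chi-square statistics."""
    num = max(chi2 - df, 0.0)
    den = max(chi2_base - df_base, chi2 - df, 0.0)
    return 1.0 - num / den

# Hypothetical nested invariance models sharing one baseline model
chi2_base, df_base = 1200.0, 66.0
cfi_configural = cfi(95.0, 48.0, chi2_base, df_base)
cfi_metric = cfi(120.0, 56.0, chi2_base, df_base)
delta_cfi = cfi_configural - cfi_metric
print(round(delta_cfi, 4))  # compared against the conventional 0.01 cutoff
```

Here the drop in CFI from the configural to the metric model exceeds 0.01, so the conventional rule would flag noninvariance; the article's point is that this cutoff may not transfer to, e.g., dichotomous items.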
Peer reviewed
PDF full text available on ERIC
R. Noah Padgett – Practical Assessment, Research & Evaluation, 2023
The consistency of psychometric properties across waves of data collection provides valuable evidence that scores can be interpreted consistently. Evidence supporting the consistency of psychometric properties can come from using a longitudinal extension of item factor analysis to account for the lack of independence of observations when evaluating…
Descriptors: Psychometrics, Factor Analysis, Item Analysis, Validity
Peer reviewed
Yan Xia; Selim Havan – Educational and Psychological Measurement, 2024
Although parallel analysis has been found to be an accurate method for determining the number of factors in many conditions with complete data, its application under missing data is limited. The existing literature recommends that, after using an appropriate multiple imputation method, researchers either apply parallel analysis to every imputed…
Descriptors: Data Interpretation, Factor Analysis, Statistical Inference, Research Problems
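For reference, the complete-data version of parallel analysis that the study extends can be sketched as follows; the simulated two-factor data and the 0.8 loadings are illustrative assumptions, and the missing-data/imputation step the article addresses is not shown:

```python
import numpy as np

def parallel_analysis(data, n_reps=100, quantile=0.95, seed=0):
    """Horn's parallel analysis on complete data: retain factors whose observed
    correlation-matrix eigenvalues exceed the chosen quantile of eigenvalues
    from random normal data of the same shape, stopping at the first failure."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_reps, p))
    for r in range(n_reps):
        sim = rng.standard_normal((n, p))
        rand[r] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    thresholds = np.quantile(rand, quantile, axis=0)
    k = 0
    for o, t in zip(obs, thresholds):
        if o <= t:
            break
        k += 1
    return k

# Demo on synthetic data with two independent factors, three items each
rng = np.random.default_rng(42)
f = rng.standard_normal((500, 2))
items = np.hstack([0.8 * f[:, [0]] + 0.6 * rng.standard_normal((500, 3)),
                   0.8 * f[:, [1]] + 0.6 * rng.standard_normal((500, 3))])
print(parallel_analysis(items))  # should suggest 2 factors
```

Under multiple imputation the open question is whether to run this on each imputed data set separately or on some pooled correlation matrix, which is the choice the article evaluates.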
Peer reviewed
Ashley L. Watts; Ashley L. Greene; Wes Bonifay; Eiko L. Fried – Grantee Submission, 2023
The p-factor is a construct that is thought to explain and maybe even cause variation in all forms of psychopathology. Since its 'discovery' in 2012, hundreds of studies have been dedicated to the extraction and validation of statistical instantiations of the p-factor, called general factors of psychopathology. In this Perspective, we outline five…
Descriptors: Causal Models, Psychopathology, Goodness of Fit, Validity
Peer reviewed
Hyunjung Lee; Heining Cham – Educational and Psychological Measurement, 2024
Determining the number of factors in exploratory factor analysis (EFA) is crucial because it affects the rest of the analysis and the conclusions of the study. Researchers have developed various methods for deciding the number of factors to retain in EFA, but this remains one of the most difficult decisions in the EFA. The purpose of this study is…
Descriptors: Factor Structure, Factor Analysis, Monte Carlo Methods, Goodness of Fit
Peer reviewed
Esra Sözer Boz – Education and Information Technologies, 2025
International large-scale assessments provide cross-national data on students' cognitive and non-cognitive characteristics. A critical methodological issue that often arises in comparing data from cross-national studies is ensuring measurement invariance, indicating that the construct under investigation is the same across the compared groups.…
Descriptors: Achievement Tests, International Assessment, Foreign Countries, Secondary School Students