Showing all 15 results
Peer reviewed
Timothy R. Konold; Elizabeth A. Sanders; Kelvin Afolabi – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Measurement invariance (MI) is an essential part of validity evidence concerned with ensuring that tests function similarly across groups, contexts, and time. Most evaluations of MI involve multigroup confirmatory factor analyses (MGCFA) that assume simple structure. However, recent research has shown that constraining non-target indicators to…
Descriptors: Evaluation Methods, Error of Measurement, Validity, Monte Carlo Methods
Ayse Busra Ceviren – ProQuest LLC, 2024
Latent change score (LCS) models are a powerful class of structural equation models that allow researchers to work with latent difference scores that minimize measurement error. LCS models define change as a function of prior status, which makes them well-suited for modeling developmental theories and processes. In LCS models, like other latent…
Descriptors: Structural Equation Models, Error of Measurement, Statistical Bias, Monte Carlo Methods
Peer reviewed
Sanghyun Hong; W. Robert Reed – Research Synthesis Methods, 2024
This study builds on the simulation framework of a recent paper by Stanley and Doucouliagos ("Research Synthesis Methods" 2023;14;515--519). S&D use simulations to make the argument that meta-analyses using partial correlation coefficients (PCCs) should employ a "suboptimal" estimator of the PCC standard error when…
Descriptors: Meta Analysis, Correlation, Weighted Scores, Simulation
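The quantity under debate in the entry above is easy to state concretely. Below is a minimal sketch (my own illustration, not the authors' or S&D's code) of a partial correlation coefficient recovered from a regression t-statistic, together with the two textbook-style standard-error formulas whose relative merits are the point of contention:

```python
import numpy as np

def pcc_from_t(t, df):
    """Partial correlation coefficient recovered from a t-statistic
    and its residual degrees of freedom."""
    return t / np.sqrt(t**2 + df)

def pcc_se(r, df, variant="squared"):
    """Two common SE formulas for a partial correlation.

    variant="squared": SE^2 = (1 - r^2)^2 / df
    variant="linear":  SE^2 = (1 - r^2) / df

    Which of these meta-analysts should use for weighting is exactly
    what the simulation debate above is about.
    """
    if variant == "squared":
        return np.sqrt((1 - r**2) ** 2 / df)
    return np.sqrt((1 - r**2) / df)

t_stat, df = 2.5, 100
r = pcc_from_t(t_stat, df)
se_sq = pcc_se(r, df, "squared")
se_lin = pcc_se(r, df, "linear")
```

Note that the "linear" variant always yields the larger standard error for |r| < 1, so the choice systematically changes the meta-analytic weights.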
Peer reviewed
Shunji Wang; Katerina M. Marcoulides; Jiashan Tang; Ke-Hai Yuan – Structural Equation Modeling: A Multidisciplinary Journal, 2024
A necessary step in applying bi-factor models is to evaluate whether domain factors are needed once a general factor is in place. Conventional null hypothesis testing (NHT) has commonly been used for this purpose, but it meets challenges when the domain loadings are weak or the sample size is insufficient. This article proposes…
Descriptors: Hypothesis Testing, Error of Measurement, Comparative Analysis, Monte Carlo Methods
Peer reviewed
Hoang V. Nguyen; Niels G. Waller – Educational and Psychological Measurement, 2024
We conducted an extensive Monte Carlo study of factor-rotation local solutions (LS) in multidimensional, two-parameter logistic (M2PL) item response models. In this study, we simulated more than 19,200 data sets that were drawn from 96 model conditions and performed more than 7.6 million rotations to examine the influence of (a) slope parameter…
Descriptors: Monte Carlo Methods, Item Response Theory, Correlation, Error of Measurement
Peer reviewed
Phillip K. Wood – Structural Equation Modeling: A Multidisciplinary Journal, 2024
The logistic and confined exponential curves are frequently used in studies of growth and learning. These models, which are nonlinear in their parameters, can be estimated using structural equation modeling software. This paper proposes a single combined model, a weighted combination of both models. Mplus, Proc Calis, and lavaan code for the model…
Descriptors: Structural Equation Models, Computation, Computer Software, Weighted Scores
Peer reviewed
Hyunjung Lee; Heining Cham – Educational and Psychological Measurement, 2024
Determining the number of factors in exploratory factor analysis (EFA) is crucial because it affects the rest of the analysis and the conclusions of the study. Researchers have developed various methods for deciding how many factors to retain, but this remains one of the most difficult decisions in EFA. The purpose of this study is…
Descriptors: Factor Structure, Factor Analysis, Monte Carlo Methods, Goodness of Fit
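One widely used retention rule in this literature is Horn's parallel analysis: keep factors whose observed eigenvalues exceed those obtained from random data of the same shape. A minimal sketch (my own illustration of the generic rule, not this study's code or conditions):

```python
import numpy as np

def parallel_analysis(data, n_reps=200, quantile=0.95, seed=0):
    """Horn's parallel analysis on the correlation matrix: count leading
    observed eigenvalues that exceed the chosen quantile of eigenvalues
    from uncorrelated normal data of the same n x p shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_reps, p))
    for i in range(n_reps):
        noise = rng.standard_normal((n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    thresh = np.quantile(sims, quantile, axis=0)
    k = 0
    for o, tval in zip(obs, thresh):   # count the leading run of exceedances
        if o > tval:
            k += 1
        else:
            break
    return k, obs, thresh

# Toy data: two uncorrelated factors, three indicators each.
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
loadings = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
x = f @ loadings.T + 0.5 * rng.standard_normal((500, 6))
k, _, _ = parallel_analysis(x)
```

With clean, well-separated structure like this the rule is reliable; the hard cases the abstract points to (small samples, weak loadings, ordinal data) are where retention methods start to disagree.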
Peer reviewed
Sean Joo; Montserrat Valdivia; Dubravka Svetina Valdivia; Leslie Rutkowski – Journal of Educational and Behavioral Statistics, 2024
Evaluating scale comparability in international large-scale assessments depends on measurement invariance (MI). The root mean square deviation (RMSD) is a standard method for establishing MI in several programs, such as the Programme for International Student Assessment and the Programme for the International Assessment of Adult Competencies.…
Descriptors: International Assessment, Monte Carlo Methods, Statistical Studies, Error of Measurement
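In its commonly described form, the RMSD statistic above is a density-weighted root mean squared gap between a group's observed item characteristic curve and the international model-implied curve. A sketch under that assumption (item parameters and the ability grid are my own toy values, not operational PISA/PIAAC code):

```python
import numpy as np

def rmsd(p_obs, p_model, weights):
    """Density-weighted RMSD between an observed and a model-implied
    item characteristic curve, evaluated on a common ability grid."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    gap = np.asarray(p_obs) - np.asarray(p_model)
    return np.sqrt(np.sum(w * gap**2))

theta = np.linspace(-4.0, 4.0, 81)
density = np.exp(-theta**2 / 2)                      # N(0,1) ability weights
p_model = 1 / (1 + np.exp(-1.2 * (theta - 0.3)))     # 2PL ICC: a=1.2, b=0.3
p_obs = 1 / (1 + np.exp(-1.2 * (theta + 0.1)))       # group curve, b shifted by 0.4
value = rmsd(p_obs, p_model, density)
```

An RMSD of zero means the group's curve matches the model exactly; operational programs flag items whose RMSD exceeds a cutoff, and the choice of that cutoff is part of what simulation studies like this one evaluate.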
Peer reviewed
Bang Quan Zheng; Peter M. Bentler – Structural Equation Modeling: A Multidisciplinary Journal, 2025
This paper aims to advocate for a balanced approach to model fit evaluation in structural equation modeling (SEM). The ongoing debate surrounding chi-square test statistics and fit indices has been characterized by ambiguity and controversy. Despite the acknowledged limitations of relying solely on the chi-square test, its careful application can…
Descriptors: Monte Carlo Methods, Structural Equation Models, Goodness of Fit, Robustness (Statistics)
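The chi-square statistic at the center of this debate has a simple closed form: T = (n-1) F_ML with F_ML = ln|Sigma| + tr(S Sigma^{-1}) - ln|S| - p, where S is the sample covariance matrix and Sigma the model-implied one. A sketch using a hand-picked Sigma for illustration (standard textbook formula, not the authors' code):

```python
import numpy as np

def ml_chi_square(S, Sigma, n):
    """SEM likelihood-ratio statistic T = (n-1) * F_ML for sample
    covariance S and model-implied covariance Sigma (p x p)."""
    p = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    F = logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p
    return (n - 1) * F

S = np.array([[1.0, 0.5],
              [0.5, 1.0]])
T_perfect = ml_chi_square(S, S, 200)          # saturated model: T = 0
T_misspec = ml_chi_square(S, np.eye(2), 200)  # model that ignores the covariance
```

The statistic grows linearly in n for a fixed misspecification, which is one root of the debate: with large samples even trivial misfit yields a significant chi-square, motivating the fit indices the paper weighs against it.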
Peer reviewed
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
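The characteristic-curve linking framework this study extends can be sketched in a few lines. Below is the classic unweighted (Stocking-Lord style) version, fit by brute-force grid search for transparency; the information-weighted methods in the abstract add weights to this loss, and all item parameters and grid ranges here are my own toy choices:

```python
import numpy as np

def p2pl(theta, a, b):
    """2PL response probabilities; rows index theta, columns index items."""
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))

def characteristic_curve_link(a_new, b_new, a_old, b_old,
                              theta=np.linspace(-4, 4, 81)):
    """Find the scale transformation theta_old = A*theta_new + B that
    best matches the two forms' test characteristic curves."""
    tcc_old = p2pl(theta, a_old, b_old).sum(axis=1)
    best_A, best_B, best_loss = 1.0, 0.0, np.inf
    for A in np.linspace(0.5, 2.0, 151):
        for B in np.linspace(-1.0, 1.0, 201):
            # New-form item parameters re-expressed on the old scale.
            tcc = p2pl(theta, a_new / A, A * b_new + B).sum(axis=1)
            loss = np.mean((tcc - tcc_old) ** 2)
            if loss < best_loss:
                best_A, best_B, best_loss = A, B, loss
    return best_A, best_B

a_old = np.array([1.0, 1.5, 0.8])
b_old = np.array([-0.5, 0.0, 0.7])
A_true, B_true = 1.2, 0.3
a_new = a_old * A_true                # same items, reported on the new scale
b_new = (b_old - B_true) / A_true
A_hat, B_hat = characteristic_curve_link(a_new, b_new, a_old, b_old)
```

With error-free parameters the grid search recovers the generating transformation exactly; the study's information weighting is aimed at the realistic case where the item parameters themselves carry estimation error.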
Peer reviewed
PDF on ERIC
Fatih Orçan – International Journal of Assessment Tools in Education, 2025
Factor analysis is a statistical method to explore the relationships among observed variables and identify latent structures. It is crucial in scale development and validity analysis. Key factors affecting the accuracy of factor analysis results include the type of data, sample size, and the number of response categories. While some studies…
Descriptors: Factor Analysis, Factor Structure, Item Response Theory, Sample Size
Peer reviewed
Yuanfang Liu; Mark H. C. Lai; Ben Kelcey – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance holds when a latent construct is measured in the same way across different levels of background variables (continuous or categorical) while controlling for the true value of that construct. Using Monte Carlo simulation, this paper compares the multiple indicators, multiple causes (MIMIC) model and MIMIC-interaction to a…
Descriptors: Classification, Accuracy, Error of Measurement, Correlation
Peer reviewed
Ke-Hai Yuan; Zhiyong Zhang – Grantee Submission, 2024
Data in social and behavioral sciences typically contain measurement errors and do not have predefined metrics. Structural equation modeling (SEM) is commonly used to analyze such data. This article discusses issues in latent-variable modeling as compared to regression analysis with composite scores. Via logical reasoning and analytical results…
Descriptors: Error of Measurement, Measurement Techniques, Social Science Research, Behavioral Science Research
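One concrete issue in the composite-score comparison above is the classic attenuation of regression slopes by measurement error, which a toy simulation shows directly (my own illustration, with the reliability fixed at 0.5 by construction rather than estimated):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_x = rng.standard_normal(n)                # error-free latent predictor
y = 0.5 * true_x + rng.standard_normal(n)      # true structural slope = 0.5
x_obs = true_x + rng.standard_normal(n)        # observed score: error var 1
                                               # -> reliability = 1/(1+1) = 0.5

# OLS slope using the error-contaminated predictor is attenuated
# toward zero by a factor equal to the reliability.
b_obs = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
reliability = 0.5                              # var(true_x) / var(x_obs), known here
b_disattenuated = b_obs / reliability
```

Latent-variable SEM builds this correction into the model rather than applying it after the fact, which is one side of the trade-off the article analyzes.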
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2024
Longitudinal models of individual growth typically emphasize between-person predictors of change but ignore how growth may vary "within" persons, because each person contributes only one data point per occasion to the model. In contrast, modeling growth with multi-item assessments allows evaluation of how relative item performance may shift…
Descriptors: Vocabulary Development, Item Response Theory, Test Items, Student Development
Peer reviewed
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Applied Measurement in Education, 2024
Longitudinal models typically emphasize between-person predictors of change but ignore how growth varies "within" persons because each person contributes only one data point at each time. In contrast, modeling growth with multi-item assessments allows evaluation of how relative item performance may shift over time. While traditionally…
Descriptors: Vocabulary Development, Item Response Theory, Test Items, Student Development