Showing 1 to 15 of 68 results
Peer reviewed
Xiaohui Luo; Yueqin Hu – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Intensive longitudinal data have been widely used to examine reciprocal or causal relations between variables. However, these variables may not be temporally aligned. This study examined the consequences of, and solutions to, the problem of temporal misalignment in intensive longitudinal data, based on dynamic structural equation models. First, the impact…
Descriptors: Structural Equation Models, Longitudinal Studies, Data Analysis, Causal Models
Peer reviewed
Emma Somer; Carl Falk; Milica Miocevic – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Factor Score Regression (FSR) is increasingly employed as an alternative to structural equation modeling (SEM) in small samples. Despite its popularity in psychology, the performance of FSR in multigroup models with small samples remains relatively unknown. The goal of this study was to examine the performance of FSR, namely Croon's correction and…
Descriptors: Scores, Structural Equation Models, Comparative Analysis, Sample Size
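For readers unfamiliar with the two-step logic behind factor score regression, a minimal sketch follows. It uses simulated data and scikit-learn's FactorAnalysis; the data, loadings, and sample size are hypothetical, and the code does not implement the Croon correction studied in the article.

```python
# Minimal factor score regression (FSR) sketch on hypothetical data; this is
# NOT the Croon-corrected estimator studied in the article.
# Step 1: estimate factor scores for a predictor factor and an outcome factor.
# Step 2: regress the outcome scores on the predictor scores.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 100  # small sample, the setting the article focuses on

# Simulate two latent factors (structural effect 0.5), each with 3 indicators.
eta_x = rng.normal(size=n)
eta_y = 0.5 * eta_x + rng.normal(scale=np.sqrt(0.75), size=n)
X = np.column_stack([0.7 * eta_x + rng.normal(scale=0.5, size=n) for _ in range(3)])
Y = np.column_stack([0.7 * eta_y + rng.normal(scale=0.5, size=n) for _ in range(3)])

# Step 1: factor scores, estimated separately per measurement block.
fx = FactorAnalysis(n_components=1).fit(X).transform(X).ravel()
fy = FactorAnalysis(n_components=1).fit(Y).transform(Y).ravel()

# Step 2: regression on estimated scores; without a correction such as
# Croon's, the slope is biased by measurement error in the scores.
slope = LinearRegression().fit(fx.reshape(-1, 1), fy).coef_[0]
print(f"uncorrected FSR slope: {slope:.3f} (true structural effect: 0.5)")
```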
Peer reviewed
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly due to the inclusion of a random factor variable (latent variable). The random factor variable gives rise to the incidental parameter problem, since the number of parameters grows as data from new persons are included. Therefore, IRT models require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
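The incidental parameter problem mentioned above is typically avoided by marginal maximum likelihood, which integrates the person parameter out of the likelihood rather than estimating one parameter per person. The rough numerical sketch below (hypothetical Rasch-type items, Gauss-Hermite quadrature over a standard normal ability distribution) illustrates only that marginalization step; it is not the estimation method proposed in the article.

```python
# Marginal likelihood of one response pattern under a Rasch-type model,
# integrating the person (ability) parameter out with Gauss-Hermite
# quadrature instead of estimating one ability parameter per person.
# Item difficulties and the response pattern are hypothetical.
import numpy as np

def marginal_likelihood(responses, difficulties, n_nodes=21):
    # Probabilists' Gauss-Hermite nodes/weights; normalizing the weights by
    # their sum turns the quadrature into an expectation over N(0, 1).
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / weights.sum()
    p = 1.0 / (1.0 + np.exp(-(nodes[:, None] - difficulties[None, :])))
    pattern = np.prod(np.where(responses[None, :] == 1, p, 1.0 - p), axis=1)
    return float(np.sum(weights * pattern))

b = np.array([-1.0, 0.0, 1.0])   # hypothetical item difficulties
u = np.array([1, 1, 0])          # one person's item responses
print(marginal_likelihood(u, b))
```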
Peer reviewed
PDF on ERIC
Tugay Kaçak; Abdullah Faruk Kiliç – International Journal of Assessment Tools in Education, 2025
Researchers continue to choose PCA in scale development and adaptation studies because it is the default setting and because it overestimates measurement quality. When PCA is utilized in investigations, the explained variance and factor loadings can be exaggerated. PCA, in contrast to the models given in the literature, should be investigated in…
Descriptors: Factor Analysis, Monte Carlo Methods, Mathematical Models, Sample Size
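The loading inflation described above can be reproduced with a toy comparison. The one-factor data-generating model below is illustrative only and is not the article's simulation design.

```python
# Toy comparison of PCA "loadings" and factor-analysis loadings on data
# generated from a one-factor model; the design is illustrative only.
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(1)
n, p, true_loading = 500, 6, 0.6
eta = rng.normal(size=(n, 1))
X = true_loading * eta + rng.normal(scale=np.sqrt(1 - true_loading**2), size=(n, p))

# PCA loading = eigenvector scaled by the square root of its eigenvalue.
pca = PCA(n_components=1).fit(X)
pca_loadings = pca.components_[0] * np.sqrt(pca.explained_variance_[0])
fa_loadings = FactorAnalysis(n_components=1).fit(X).components_[0]

print("true loading:", true_loading)
print("PCA loadings:", np.round(np.abs(pca_loadings), 2))  # typically inflated
print("FA loadings: ", np.round(np.abs(fa_loadings), 2))   # closer to 0.6
```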
Peer reviewed
Russell P. Houpt; Kevin J. Grimm; Aaron T. McLaughlin; Daryl R. Van Tongeren – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Numerous methods exist to determine the optimal number of classes when using latent profile analysis (LPA), but none are consistently correct. Recently, the likelihood incremental percentage per parameter (LI3P) was proposed as a model effect-size measure. To evaluate the LI3P more thoroughly, we simulated 50,000 datasets, manipulating factors…
Descriptors: Structural Equation Models, Profiles, Sample Size, Evaluation Methods
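A hedged sketch of the class-enumeration setting follows, using a Gaussian mixture as a stand-in for latent profile analysis. The "LI3P-like" quantity is a plain reading of "likelihood incremental percentage per parameter" and may not match the measure evaluated in the article.

```python
# Class enumeration for a latent-profile-style model via Gaussian mixtures.
# The "LI3P-like" quantity (percentage log-likelihood gain per added
# parameter) is a plain reading of the abstract, not the authors' formula.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Two true profiles on four continuous indicators.
X = np.vstack([rng.normal(0.0, 1.0, size=(300, 4)),
               rng.normal(1.5, 1.0, size=(300, 4))])
d = X.shape[1]

prev_ll = prev_par = None
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         random_state=0).fit(X)
    ll = gm.score(X) * len(X)             # total log-likelihood
    n_par = k * d + k * d + (k - 1)       # means + variances + mixing weights
    line = f"{k} classes: BIC={gm.bic(X):.1f}"
    if prev_ll is not None:
        li3p = 100 * (ll - prev_ll) / (abs(prev_ll) * (n_par - prev_par))
        line += f", LI3P-like={li3p:.3f}"
    print(line)
    prev_ll, prev_par = ll, n_par
```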
Peer reviewed
Ismail Cuhadar; Ömür Kaya Kalkan – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Simulation studies are needed to investigate how many score categories are sufficient to treat ordered categorical data as continuous, particularly for bifactor models. The current simulation study aims to address such needs by investigating the performance of estimation methods in the bifactor models with ordered categorical data. Results support…
Descriptors: Predictor Variables, Structural Equation Models, Sample Size, Evaluation Methods
Peer reviewed
Suppanut Sriutaisuk; Yu Liu; Seungwon Chung; Hanjoe Kim; Fei Gu – Educational and Psychological Measurement, 2025
The multiple imputation two-stage (MI2S) approach holds promise for evaluating the model fit of structural equation models for ordinal variables with multiply imputed data. However, previous studies only examined the performance of MI2S-based residual-based test statistics. This study extends previous research by examining the performance of two…
Descriptors: Structural Equation Models, Error of Measurement, Programming Languages, Goodness of Fit
Peer reviewed
David Goretzko; Karik Siemund; Philipp Sterner – Educational and Psychological Measurement, 2024
Confirmatory factor analyses (CFA) are often used in psychological research when developing measurement models for psychological constructs. Evaluating CFA model fit can be quite challenging, as tests for exact model fit may focus on negligible deviances, while fit indices cannot be interpreted absolutely without specifying thresholds or cutoffs.…
Descriptors: Factor Analysis, Goodness of Fit, Psychological Studies, Measurement
Peer reviewed
Jang, Yoona; Hong, Sehee – Educational and Psychological Measurement, 2023
The purpose of this study was to evaluate the degree of classification quality in the basic latent class model when covariates are either included or are not included in the model. To accomplish this task, Monte Carlo simulations were conducted in which the results of models with and without a covariate were compared. Based on these simulations,…
Descriptors: Classification, Models, Prediction, Sample Size
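Classification quality in latent class and profile models is commonly summarized by relative entropy computed from posterior class probabilities. The sketch below uses a Gaussian mixture as a stand-in (the basic latent class model in the article uses categorical indicators) and is not the study's simulation design.

```python
# Relative entropy as a classification-quality summary computed from
# posterior class probabilities. A Gaussian mixture stands in for the basic
# latent class model (which would use categorical indicators).
import numpy as np
from sklearn.mixture import GaussianMixture

def relative_entropy(post):
    """Returns a value in [0, 1]; values near 1 indicate clear classification."""
    n, k = post.shape
    h = -np.sum(post * np.log(np.clip(post, 1e-12, None)))
    return 1.0 - h / (n * np.log(k))

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, size=(200, 3)),
               rng.normal(2, 1, size=(200, 3))])
post = GaussianMixture(n_components=2, random_state=0).fit(X).predict_proba(X)
print(f"relative entropy: {relative_entropy(post):.3f}")
```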
Du, Han; Enders, Craig; Keller, Brian; Bradbury, Thomas N.; Karney, Benjamin R. – Grantee Submission, 2022
Missing data are exceedingly common across a variety of disciplines, such as the educational, social, and behavioral sciences. The missing not at random (MNAR) mechanism, in which missingness is related to unobserved data, is widespread in real data and has detrimental consequences. However, the existing MNAR-based methods have potential problems such as…
Descriptors: Bayesian Statistics, Data Analysis, Computer Simulation, Sample Size
Ben Stenhaug; Ben Domingue – Grantee Submission, 2022
The fit of an item response model is typically conceptualized as whether a given model could have generated the data. We advocate for an alternative view of fit, "predictive fit", based on the model's ability to predict new data. We derive two predictive fit metrics for item response models that assess how well an estimated item response…
Descriptors: Goodness of Fit, Item Response Theory, Prediction, Models
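A rough illustration of the predictive-fit idea: evaluate a fitted item response model by its log-likelihood on held-out responses rather than by in-sample fit. The Rasch-type data, the crude difficulty and ability estimates, and the train/test split below are hypothetical; they are not the metrics derived in the paper.

```python
# Predictive-fit sketch for a Rasch-type model: estimate item difficulties on
# training persons, estimate each test person's ability from half of their
# items, then score the model by its log-likelihood on the held-out half.
# The difficulty/ability estimates are crude approximations used only to
# illustrate out-of-sample evaluation; these are not the paper's metrics.
import numpy as np

rng = np.random.default_rng(4)
n_train, n_test, n_items = 500, 200, 20
b_true = rng.normal(size=n_items)            # item difficulties
theta = rng.normal(size=n_train + n_test)    # person abilities

def simulate(theta, b):
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.uniform(size=p.shape) < p).astype(int)

def loglik(u, theta_grid, b):
    p = 1 / (1 + np.exp(-(theta_grid[:, None] - b[None, :])))
    return np.sum(np.where(u[None, :] == 1, np.log(p), np.log(1 - p)), axis=1)

U = simulate(theta, b_true)
train, test = U[:n_train], U[n_train:]

# Crude moment estimates of difficulty from training proportions correct.
p_item = train.mean(axis=0).clip(0.01, 0.99)
b_hat = -np.log(p_item / (1 - p_item))

# For each test person: grid-search ability on odd items, evaluate on even items.
grid = np.linspace(-4, 4, 81)
odd, even = np.arange(1, n_items, 2), np.arange(0, n_items, 2)
held_out_ll = 0.0
for u in test:
    theta_hat = grid[np.argmax(loglik(u[odd], grid, b_hat[odd]))]
    held_out_ll += loglik(u[even], np.array([theta_hat]), b_hat[even])[0]
print(f"held-out log-likelihood: {held_out_ll:.1f}")
```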
Peer reviewed
Na Shan; Ping-Feng Xu – Journal of Educational and Behavioral Statistics, 2025
The detection of differential item functioning (DIF) is important in psychological and behavioral sciences. Standard DIF detection methods perform an item-by-item test iteratively, often assuming that all items except the one under investigation are DIF-free. This article proposes a Bayesian adaptive Lasso method to detect DIF in graded response…
Descriptors: Bayesian Statistics, Item Response Theory, Adolescents, Longitudinal Studies
Peer reviewed
Hung, Su-Pin; Huang, Hung-Yu – Journal of Educational and Behavioral Statistics, 2022
To address response style or bias in rating scales, forced-choice items are often used to request that respondents rank their attitudes or preferences among a limited set of options. The rating scales used by raters to render judgments on ratees' performance also contribute to rater bias or errors; consequently, forced-choice items have recently…
Descriptors: Evaluation Methods, Rating Scales, Item Analysis, Preferences
Chun Wang; Ruoyi Zhu; Gongjun Xu – Grantee Submission, 2022
Differential item functioning (DIF) analysis refers to procedures that evaluate whether an item's characteristics differ for different groups of persons after controlling for overall differences in performance. DIF is routinely evaluated as a screening step to ensure items behave the same across groups. Currently, the majority of DIF studies focus…
Descriptors: Models, Item Response Theory, Item Analysis, Comparative Analysis
Peer reviewed
Ben Kelcey; Fangxing Bai; Amota Ataneka; Yanli Xie; Kyle Cox – Society for Research on Educational Effectiveness, 2024
We develop a structural after measurement (SAM) method for structural equation models (SEMs) that accommodates missing data. The results show that the proposed SAM missing data estimator outperforms conventional full information (FI) estimators in terms of convergence, bias, and root-mean-square-error in small-to-moderate samples or large samples…
Descriptors: Structural Equation Models, Research Problems, Error of Measurement, Maximum Likelihood Statistics