Publication Date
In 2025: 1
Since 2024: 2
Since 2021 (last 5 years): 4
Since 2016 (last 10 years): 10
Since 2006 (last 20 years): 22

Descriptor
Factor Analysis: 29
Monte Carlo Methods: 29
Simulation: 29
Error of Measurement: 10
Item Response Theory: 10
Sample Size: 10
Correlation: 9
Computation: 7
Evaluation Methods: 7
Statistical Analysis: 6
Structural Equation Models: 6
Publication Type
Journal Articles: 28
Reports - Research: 18
Reports - Evaluative: 10
Reports - Descriptive: 1
Speeches/Meeting Papers: 1
Education Level
High Schools: 1

Audience
Researchers: 2
Tugay Kaçak; Abdullah Faruk Kiliç – International Journal of Assessment Tools in Education, 2025
Researchers continue to choose PCA in scale development and adaptation studies because it is the default setting and because it overestimates measurement quality. When PCA is used, the explained variance and factor loadings can be exaggerated. In contrast to the models given in the literature, PCA should be investigated in…
Descriptors: Factor Analysis, Monte Carlo Methods, Mathematical Models, Sample Size
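To illustrate the kind of overestimation described above, here is a minimal Python sketch (assuming numpy and scikit-learn; not the authors' code): data are generated from a one-factor model and first-component PCA loadings are compared with maximum likelihood factor loadings. The sample size and population loading are arbitrary.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    n, p, true_loading = 300, 8, 0.6          # illustrative values

    # Simulate unit-variance items that load 0.6 on a single common factor
    factor = rng.normal(size=(n, 1))
    X = factor @ np.full((1, p), true_loading) + rng.normal(
        scale=np.sqrt(1 - true_loading**2), size=(n, p))

    # PCA "loadings": first eigenvector scaled by the root of its eigenvalue
    eigval, eigvec = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    pca_load = eigvec[:, -1] * np.sqrt(eigval[-1])

    # Maximum likelihood factor loadings on standardized items
    Z = (X - X.mean(0)) / X.std(0)
    fa_load = FactorAnalysis(n_components=1).fit(Z).components_.ravel()

    print("population loading :", true_loading)
    print("mean |FA loading|  :", round(float(np.abs(fa_load).mean()), 3))
    print("mean |PCA loading| :", round(float(np.abs(pca_load).mean()), 3))  # larger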
Hoang V. Nguyen; Niels G. Waller – Educational and Psychological Measurement, 2024
We conducted an extensive Monte Carlo study of factor-rotation local solutions (LS) in multidimensional, two-parameter logistic (M2PL) item response models. In this study, we simulated more than 19,200 data sets that were drawn from 96 model conditions and performed more than 7.6 million rotations to examine the influence of (a) slope parameter…
Descriptors: Monte Carlo Methods, Item Response Theory, Correlation, Error of Measurement
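The general idea of screening for rotation local solutions can be sketched with plain numpy (this is not the authors' M2PL design, and orthogonal varimax is used here for simplicity rather than the oblique rotations studied in the article): rerun the rotation from many random orthogonal starts and compare the achieved criterion values, where more than one distinct value signals local solutions. The loading matrix and number of starts are invented.

    import numpy as np

    def varimax(L, n_iter=100, tol=1e-8):
        # Classic SVD-based orthogonal varimax rotation
        p, k = L.shape
        R, d = np.eye(k), 0.0
        for _ in range(n_iter):
            LR = L @ R
            u, s, vt = np.linalg.svd(L.T @ (LR**3 - LR @ np.diag((LR**2).sum(0)) / p))
            R, d_old, d = u @ vt, d, s.sum()
            if d_old and d / d_old < 1 + tol:
                break
        return L @ R

    def varimax_criterion(L):
        # Sum over factors of the variance of squared loadings (maximized by varimax)
        return float(sum(np.var(L[:, j] ** 2) for j in range(L.shape[1])))

    rng = np.random.default_rng(7)
    A = rng.uniform(0.2, 0.9, size=(20, 3))      # invented unrotated loading matrix

    criteria = set()
    for _ in range(50):                          # 50 random orthogonal starting positions
        Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        criteria.add(round(varimax_criterion(varimax(A @ Q)), 6))

    print(sorted(criteria))                      # more than one value => local solutions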
Aidoo, Eric Nimako; Appiah, Simon K.; Boateng, Alexander – Journal of Experimental Education, 2021
This study investigated the small-sample bias of the ordered logit model parameters under multicollinearity using Monte Carlo simulation. The results showed that the bias associated with the ordered logit model parameters consistently decreases as sample size increases, while the distribution of the parameters becomes less…
Descriptors: Statistical Bias, Monte Carlo Methods, Simulation, Sample Size
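A minimal Monte Carlo sketch in the same spirit (not the authors' design), assuming statsmodels 0.12 or later provides OrderedModel; the sample size, predictor correlation, true coefficients, and thresholds are all illustrative:

    import numpy as np
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(42)
    n, n_rep = 50, 200                           # small sample, 200 replications
    beta = np.array([1.0, 1.0])                  # true slopes
    cov = np.array([[1.0, 0.9], [0.9, 1.0]])     # highly collinear predictors
    cuts = np.array([-0.5, 1.0])                 # latent thresholds -> 3 categories

    estimates = []
    for _ in range(n_rep):
        X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        latent = X @ beta + rng.logistic(size=n)
        y = np.digitize(latent, cuts)            # ordinal response coded 0, 1, 2
        res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
        estimates.append(np.asarray(res.params)[:2])   # slopes come first, thresholds last

    mean_est = np.mean(estimates, axis=0)
    print("true slopes :", beta)
    print("mean est.   :", mean_est.round(3))
    print("bias        :", (mean_est - beta).round(3))  # rerun with larger n to see it shrink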
Koyuncu, Ilhan; Kilic, Abdullah Faruk – International Journal of Assessment Tools in Education, 2021
In exploratory factor analysis, although researchers decide which items belong to which factors by considering statistical results, the decisions can sometimes be subjective when items have similar factor loadings or complex factor structures. The aim of this study was to examine the validity of classifying items into…
Descriptors: Classification, Graphs, Factor Analysis, Decision Making
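The kind of rule-based classification such a study evaluates can be sketched with plain numpy and an invented loading matrix: assign each item to the factor with its largest absolute loading, and flag items whose two largest loadings are nearly tied (the situation in which decisions become subjective).

    import numpy as np

    loadings = np.array([        # hypothetical 6-item x 2-factor pattern matrix
        [0.72, 0.10],
        [0.65, 0.18],
        [0.58, 0.05],
        [0.12, 0.70],
        [0.08, 0.62],
        [0.45, 0.41],            # complex item: similar loadings on both factors
    ])
    min_gap = 0.10               # smallest acceptable gap between the two largest |loadings|

    for i, row in enumerate(np.abs(loadings)):
        best, second = np.sort(row)[::-1][:2]
        factor = int(np.argmax(row)) + 1
        note = "  <- ambiguous, needs judgment" if best - second < min_gap else ""
        print(f"item {i + 1}: factor {factor} (gap = {best - second:.2f}){note}")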
Olivera-Aguilar, Margarita; Rikoon, Samuel H.; Gonzalez, Oscar; Kisbu-Sakarya, Yasemin; MacKinnon, David P. – Educational and Psychological Measurement, 2018
When testing a statistical mediation model, it is assumed that factorial measurement invariance holds for the mediating construct across levels of the independent variable X. The consequences of failing to address the violations of measurement invariance in mediation models are largely unknown. The purpose of the present study was to…
Descriptors: Error of Measurement, Statistical Analysis, Factor Analysis, Simulation
Fan, Yi; Lance, Charles E. – Educational and Psychological Measurement, 2017
The correlated trait-correlated method (CTCM) model for the analysis of multitrait-multimethod (MTMM) data is known to suffer from convergence and admissibility (C&A) problems. We describe a little-known and seldom-applied reparameterized version of this model (CTCM-R), based on Rindskopf's reparameterization of the simpler confirmatory factor…
Descriptors: Multitrait Multimethod Techniques, Correlation, Goodness of Fit, Models
Dimitrov, Dimiter M. – Measurement and Evaluation in Counseling and Development, 2017
This article offers an approach to examining differential item functioning (DIF) under its item response theory (IRT) treatment within the framework of confirmatory factor analysis (CFA). The approach is based on integrating IRT- and CFA-based testing of DIF and using bias-corrected bootstrap confidence intervals, with syntax code in Mplus.
Descriptors: Test Bias, Item Response Theory, Factor Analysis, Evaluation Methods
Li, Ming; Harring, Jeffrey R. – Educational and Psychological Measurement, 2017
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis may not only improve the ability of the mixture model to clearly differentiate between subjects but also make interpretation of latent group membership more…
Descriptors: Simulation, Comparative Analysis, Monte Carlo Methods, Guidelines
Dardick, William R.; Mislevy, Robert J. – Educational and Psychological Measurement, 2016
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Descriptors: Bayesian Statistics, Probability, Data Analysis, Item Response Theory
Devlieger, Ines; Mayer, Axel; Rosseel, Yves – Educational and Psychological Measurement, 2016
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias-avoiding method of Skrondal and Laake, and the bias-correcting method of Croon. The bias-correcting method is extended to include a reliable standard error. The four methods are compared with each other and…
Descriptors: Regression (Statistics), Comparative Analysis, Structural Equation Models, Monte Carlo Methods
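A minimal sketch of why corrections such as Croon's matter (not the authors' comparison): naive factor score regression attenuates the structural slope relative to its true value. It assumes scikit-learn's FactorAnalysis, whose transform() yields posterior-mean (regression-type) factor scores; the loadings and true slope are illustrative.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(3)
    n, b, lam = 500, 0.5, 0.7                         # sample size, true slope, loadings

    # Structural model: eta_y = b * eta_x + disturbance; four indicators per factor
    eta_x = rng.normal(size=n)
    eta_y = b * eta_x + rng.normal(scale=np.sqrt(1 - b**2), size=n)
    X = np.outer(eta_x, np.full(4, lam)) + rng.normal(scale=np.sqrt(1 - lam**2), size=(n, 4))
    Y = np.outer(eta_y, np.full(4, lam)) + rng.normal(scale=np.sqrt(1 - lam**2), size=(n, 4))

    # Factor score regression: score each block, then regress the scores
    fx = FactorAnalysis(n_components=1).fit(X).transform(X).ravel()
    fy = FactorAnalysis(n_components=1).fit(Y).transform(Y).ravel()
    slope = np.polyfit(fx, fy, 1)[0]

    print("true slope      :", b)
    print("naive FSR slope :", round(abs(float(slope)), 3))   # attenuated; |.| guards sign flips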
Liu, Yan; Zumbo, Bruno D.; Wu, Amery D. – Educational and Psychological Measurement, 2012
Previous studies have rarely examined the impact of outliers on the decisions about the number of factors to extract in an exploratory factor analysis. The few studies that have investigated this issue have arrived at contradictory conclusions regarding whether outliers inflated or deflated the number of factors extracted. By systematically…
Descriptors: Factor Analysis, Data, Simulation, Monte Carlo Methods
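A minimal numpy sketch (not the authors' design) of the mechanism at issue: a handful of outlying respondents can change how many factors an eigenvalue-based rule suggests retaining. The contamination pattern, sample size, and loadings are invented.

    import numpy as np

    rng = np.random.default_rng(11)
    n, p, lam = 200, 9, 0.6

    # Clean data from a one-factor model with unit-variance items
    eta = rng.normal(size=(n, 1))
    X = eta @ np.full((1, p), lam) + rng.normal(scale=np.sqrt(1 - lam**2), size=(n, p))

    # Contaminated copy: five respondents with extreme scores on the first three items
    X_out = X.copy()
    X_out[:5, :3] += 15.0

    def kaiser_count(data):
        # Number of correlation-matrix eigenvalues exceeding 1
        return int((np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)) > 1).sum())

    print("clean data      :", kaiser_count(X), "eigenvalue(s) > 1")
    print("with 5 outliers :", kaiser_count(X_out), "eigenvalue(s) > 1")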
Yurdugul, Halil – Applied Psychological Measurement, 2009
This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of their confidence intervals. SIMREL runs in two modes. In the first, if SIMREL is run on a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…
Descriptors: Intervals, Monte Carlo Methods, Computer Software, Factor Analysis
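SIMREL itself is not reproduced here; as a rough illustration of the quantities it simulates, the following numpy sketch computes coefficient alpha for simulated item scores together with a simple nonparametric bootstrap confidence interval (the data-generating values are invented).

    import numpy as np

    def cronbach_alpha(items):
        # items: 2-D array, rows = respondents, columns = items
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(5)
    n, k, lam = 250, 6, 0.6
    eta = rng.normal(size=(n, 1))
    X = eta @ np.full((1, k), lam) + rng.normal(scale=np.sqrt(1 - lam**2), size=(n, k))

    boot = [cronbach_alpha(X[rng.integers(0, n, n)]) for _ in range(2000)]
    ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
    print(f"alpha = {cronbach_alpha(X):.3f}, 95% bootstrap CI = [{ci_low:.3f}, {ci_high:.3f}]")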
Dinno, Alexis – Multivariate Behavioral Research, 2009
Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make assertions…
Descriptors: Structural Equation Models, Item Response Theory, Computer Software, Surveys
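A compact numpy sketch of parallel analysis as it is commonly implemented (retain components whose observed eigenvalues exceed a percentile of the eigenvalues obtained from random data of the same dimensions); the two-factor example data below are invented.

    import numpy as np

    def parallel_analysis(X, n_sims=200, percentile=95, seed=0):
        # Count the leading components whose observed eigenvalues exceed the chosen
        # percentile of eigenvalues from random normal data of the same dimensions
        rng = np.random.default_rng(seed)
        n, p = X.shape
        obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
        rand = np.empty((n_sims, p))
        for i in range(n_sims):
            R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
            rand[i] = np.sort(np.linalg.eigvalsh(R))[::-1]
        threshold = np.percentile(rand, percentile, axis=0)
        retain = 0
        for o, t in zip(obs, threshold):
            if o > t:
                retain += 1
            else:
                break
        return retain

    # Example: data from two orthogonal factors should yield "retain 2"
    rng = np.random.default_rng(2)
    n = 300
    f = rng.normal(size=(n, 2))
    load = np.zeros((12, 2))
    load[:6, 0], load[6:, 1] = 0.7, 0.7
    X = f @ load.T + rng.normal(scale=np.sqrt(1 - 0.49), size=(n, 12))
    print("components to retain:", parallel_analysis(X))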
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A. – Multivariate Behavioral Research, 2009
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
Descriptors: Sample Size, Factor Analysis, Enrollment, Evaluation Methods
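A minimal sketch in the same spirit (not the authors' full design): vary N for a one-factor model with fairly high loadings and track Tucker's congruence between the true and estimated loadings, assuming scikit-learn's FactorAnalysis; all values are illustrative.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(9)
    p, lam, n_rep = 8, 0.7, 100
    true = np.full(p, lam)

    def congruence(a, b):
        # Tucker's congruence coefficient; absolute value handles sign indeterminacy
        return abs(a @ b) / np.sqrt((a @ a) * (b @ b))

    for n in (25, 50, 100, 200):
        cong = []
        for _ in range(n_rep):
            eta = rng.normal(size=(n, 1))
            X = eta @ true[None, :] + rng.normal(scale=np.sqrt(1 - lam**2), size=(n, p))
            est = FactorAnalysis(n_components=1).fit(X).components_.ravel()
            cong.append(congruence(true, est))
        print(f"N = {n:3d}: mean congruence = {np.mean(cong):.3f}")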
Froelich, Amy G.; Habing, Brian – Applied Psychological Measurement, 2008
DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…
Descriptors: Test Items, Monte Carlo Methods, Form Classes (Languages), Program Effectiveness