Showing 1 to 15 of 47 results
Peer reviewed
Tugay Kaçak; Abdullah Faruk Kiliç – International Journal of Assessment Tools in Education, 2025
Researchers continue to choose PCA in scale development and adaptation studies because it is the default setting, even though it overestimates measurement quality. When PCA is used in investigations, the explained variance and factor loadings can be exaggerated. PCA, in contrast to the models given in the literature, should be investigated in…
Descriptors: Factor Analysis, Monte Carlo Methods, Mathematical Models, Sample Size
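A minimal numpy sketch (not from the article) illustrates the inflation the abstract describes: for a population one-factor model in which every indicator has a true loading of 0.60, the loadings on the first principal component come out noticeably higher, and the "explained variance" exceeds the true common variance.

```python
import numpy as np

# Population correlation matrix for a one-factor model:
# six indicators, each with a true factor loading of 0.60.
lam = np.full(6, 0.60)
R = np.outer(lam, lam)
np.fill_diagonal(R, 1.0)

# PCA on the correlation matrix via eigendecomposition.
eigvals, eigvecs = np.linalg.eigh(R)
i = np.argmax(eigvals)
pca_loadings = np.abs(eigvecs[:, i] * np.sqrt(eigvals[i]))

print(pca_loadings.round(3))  # each ≈ 0.683, inflated above the true 0.60
print(eigvals[i] / 6)         # ≈ 0.467 "explained", vs true common variance 0.36
```

Because the component must absorb unique variance as well as common variance, this inflation occurs even at the population level, with no sampling error involved.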
Peer reviewed
Lingbo Tong; Wen Qu; Zhiyong Zhang – Grantee Submission, 2025
Factor analysis is widely used to identify latent factors underlying observed variables. This paper presents a comprehensive comparative study of two widely used methods for determining the optimal number of factors in factor analysis, the K1 rule and parallel analysis, along with a more recently developed method, the bass-ackward method.…
Descriptors: Factor Analysis, Monte Carlo Methods, Statistical Analysis, Sample Size
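As a rough illustration of the two classical methods the paper compares (the bass-ackward method is omitted here), a numpy sketch applies the K1 rule and parallel analysis to data simulated from a known two-factor structure; both recover two factors in this clean case.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 10

# Simulate data from a two-factor model: loadings of 0.7,
# five indicators per factor, unit total variance per variable.
F = rng.standard_normal((n, 2))
load = np.zeros((p, 2))
load[:5, 0] = load[5:, 1] = 0.7
X = F @ load.T + np.sqrt(1 - 0.49) * rng.standard_normal((n, p))

obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X.T)))[::-1]

# K1 rule: retain eigenvalues of the correlation matrix greater than 1.
k1 = int(np.sum(obs_eig > 1))

# Parallel analysis: retain eigenvalues exceeding the 95th percentile
# of eigenvalues from random normal data of the same shape.
rand_eigs = np.array([
    np.sort(np.linalg.eigvalsh(np.corrcoef(rng.standard_normal((n, p)).T)))[::-1]
    for _ in range(200)
])
threshold = np.percentile(rand_eigs, 95, axis=0)
pa = int(np.sum(obs_eig > threshold))

print(k1, pa)  # both methods retain 2 factors here
```

The two rules diverge in harder settings (weak factors, small samples, many variables), which is exactly the territory such Monte Carlo comparisons probe.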
Peer reviewed
El Kadiri Boutchich, Driss – Journal of Education, 2019
The alternative human capital evaluation method proposed is implemented via two methods: principal component analysis and discriminant analysis. The first method classifies the research laboratory activities in homogeneous clusters, while the second method standardizes their scores. The proposed method is situated within the activity-based management…
Descriptors: Human Capital, Evaluation Methods, Laboratories, Factor Analysis
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2017
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Descriptors: Error of Measurement, Factor Analysis, Research Methodology, Psychometrics
Peer reviewed
Ramlo, Sue – Mid-Western Educational Researcher, 2016
This manuscript's purpose is to introduce Q as a methodology before providing clarification about the preferred factor analytical choices of centroid and theoretical (hand) rotation. Stephenson, the creator of Q, designated that only these choices allowed for scientific exploration of subjectivity while not violating assumptions associated with…
Descriptors: Research Methodology, Q Methodology, Factor Analysis, Computer Software
Peer reviewed
Hoelscher, Michael – Research in Comparative and International Education, 2017
This article argues that strong interrelations exist between methodological and theoretical advances. Progress in methods, especially comparative ones, may have important impacts on theory evaluation. By using the example of the "Varieties of Capitalism" approach and an international comparison of higher education systems, it can be shown…
Descriptors: Higher Education, Comparative Education, Research Methodology, Cross Cultural Studies
Peer reviewed
Vrieze, Scott I. – Psychological Methods, 2012
This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important…
Descriptors: Factor Analysis, Statistical Analysis, Psychology, Interviews
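The two criteria the review compares differ only in their complexity penalty: AIC charges 2 per parameter, BIC charges log n. A short sketch with hypothetical log-likelihoods (illustrative values, not from the article) shows how the two can disagree on the same pair of models.

```python
import numpy as np

def aic(loglik, k):
    # Akaike information criterion: -2*logL + 2k
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    # Bayesian information criterion: -2*logL + k*log(n)
    return -2 * loglik + k * np.log(n)

# Hypothetical fits: a simpler and a more complex latent variable model.
n = 1000
print(aic(-520.0, 5), aic(-514.0, 10))
# 1050.0 vs 1048.0: AIC prefers the more complex model
print(bic(-520.0, 5, n), bic(-514.0, 10, n))
# BIC's log(1000) ≈ 6.9 penalty per parameter flips the choice
# to the simpler model
```

This asymmetry is the crux of the AIC-versus-BIC debate the review addresses: BIC's heavier penalty favors parsimony as n grows, while AIC favors predictive fit.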
Peer reviewed
Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is a lack of research on the effects of outliers on the decisions about the number of factors to retain in an exploratory factor analysis, especially for outliers arising from unintended and unknowingly included subpopulations. The purpose of the present research was to investigate how outliers from an unintended and unknowingly included…
Descriptors: Factor Analysis, Factor Structure, Evaluation Research, Evaluation Methods
Peer reviewed
Martin, Andrew J.; Yu, Kai; Papworth, Brad; Ginns, Paul; Collie, Rebecca J. – Journal of Psychoeducational Assessment, 2015
This study explored motivation and engagement among North American (the United States and Canada; n = 1,540), U.K. (n = 1,558), Australian (n = 2,283), and Chinese (n = 3,753) secondary school students. Motivation and engagement were assessed via students' responses to the Motivation and Engagement Scale-High School (MES-HS). Confirmatory factor…
Descriptors: Foreign Countries, Motivation, Learner Engagement, Secondary School Students
Peer reviewed
Harraway, John; Broughton-Ansin, Freya; Deaker, Lynley; Jowett, Tim; Shephard, Kerry – Journal of Environmental Education, 2012
Higher education institutions are interested in the impact that they and concurrent life experiences may have on students' sustainability attitudes, but they lack formal processes to monitor changes. We used the NEP to monitor changes in students' ecological worldviews. We were interested in what variation there would be in a multidisciplinary…
Descriptors: Student Attitudes, Research Methodology, Evaluation Methods, Conservation (Environment)
Peer reviewed
Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo – Structural Equation Modeling: A Multidisciplinary Journal, 2011
Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…
Descriptors: Factor Analysis, Research Methodology, Statistics, Item Response Theory
Peer reviewed
Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L. – Multivariate Behavioral Research, 2011
A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…
Descriptors: Simulation, Research Methodology, Factor Analysis, Item Response Theory
Peer reviewed
Jones-Farmer, L. Allison – Structural Equation Modeling: A Multidisciplinary Journal, 2010
When comparing latent variables among groups, it is important to first establish the equivalence or invariance of the measurement model across groups. Confirmatory factor analysis (CFA) is a commonly used methodological approach to examine measurement equivalence/invariance (ME/I). Within the CFA framework, the chi-square goodness-of-fit test and…
Descriptors: Factor Structure, Factor Analysis, Evaluation Research, Goodness of Fit
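Within the CFA framework the abstract mentions, nested invariance models are commonly compared with a chi-square difference test. A sketch with hypothetical fit statistics (illustrative values only, not drawn from the article):

```python
from scipy.stats import chi2

# Hypothetical two-group CFA fit statistics (illustrative values):
chisq_configural, df_configural = 180.4, 76  # loadings free across groups
chisq_metric, df_metric = 192.1, 82          # loadings constrained equal

# Chi-square difference test for the nested comparison.
delta_chisq = chisq_metric - chisq_configural   # 11.7
delta_df = df_metric - df_configural            # 6
p_value = chi2.sf(delta_chisq, delta_df)

# Here p > .05 (the critical value for 6 df is 12.59), so the equality
# constraints would not be rejected at the conventional level.
print(round(delta_chisq, 1), delta_df, round(p_value, 3))
```

The chi-square test's sensitivity to sample size is one reason the ME/I literature, including this article, also examines alternative fit indices and decision rules.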
Peer reviewed
Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria – Psychological Methods, 2012
A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…
Descriptors: Factor Analysis, Computation, Simulation, Sample Size
Peer reviewed
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E. – Educational and Psychological Measurement, 2009
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…
Descriptors: Simulation, Item Response Theory, Test Items, Factor Analysis