Publication Date
  In 2025: 1
  Since 2024: 2
  Since 2021 (last 5 years): 3
  Since 2016 (last 10 years): 6
  Since 2006 (last 20 years): 12
Descriptor
  Factor Analysis: 13
  Psychometrics: 13
  Simulation: 13
  Evaluation Methods: 7
  Correlation: 5
  Item Response Theory: 4
  Matrices: 4
  Research Methodology: 4
  Error of Measurement: 3
  Evaluation Research: 3
  Goodness of Fit: 3
Source
  Educational and Psychological…: 8
  Applied Psychological…: 1
  Intelligence: 1
  International Educational…: 1
  Psychometrika: 1
  Structural Equation Modeling:…: 1
Author
  Liu, Yan: 2
  Zumbo, Bruno D.: 2
  D'Urso, E. Damiano: 1
  De Roover, Kim: 1
  E. Damiano D'Urso: 1
  Edwards, Michael C.: 1
  Ferrando, Pere Joan: 1
  Formann, Anton K.: 1
  Jensen, Arthur R.: 1
  Jeroen K. Vermunt: 1
  Jesper Tijmstra: 1
Publication Type
  Journal Articles: 12
  Reports - Research: 8
  Reports - Evaluative: 5
  Speeches/Meeting Papers: 1
Xia, Yan; Zhou, Xinchang – Educational and Psychological Measurement, 2025
Parallel analysis has been considered one of the most accurate methods for determining the number of factors in factor analysis. One major advantage of parallel analysis over traditional factor retention methods (e.g., Kaiser's rule) is that it addresses the sampling variability of eigenvalues obtained from the identity matrix, representing the…
Descriptors: Factor Analysis, Statistical Analysis, Evaluation Methods, Sampling
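The core logic of parallel analysis is simple enough to sketch. The following is a minimal, generic Python/NumPy illustration, not the authors' implementation: observed eigenvalues are compared with a percentile of eigenvalues obtained from random normal data of the same dimensions (whose population correlation matrix is the identity), and factors are retained as long as the observed eigenvalue exceeds its reference value. The function name, the 500 replications, and the 95th-percentile rule are illustrative assumptions.

import numpy as np

def parallel_analysis(data, n_sims=500, percentile=95, seed=0):
    """Horn-style parallel analysis for an n x p data matrix (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues of the sample correlation matrix, in descending order.
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Reference eigenvalues from random data with an identity population correlation matrix.
    sims = np.empty((n_sims, p))
    for s in range(n_sims):
        noise = rng.standard_normal((n, p))
        sims[s] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    threshold = np.percentile(sims, percentile, axis=0)
    # Retain factors until an observed eigenvalue fails to exceed its reference value.
    n_factors = 0
    for o, t in zip(obs, threshold):
        if o <= t:
            break
        n_factors += 1
    return n_factors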
D'Urso, E. Damiano; Tijmstra, Jesper; Vermunt, Jeroen K.; De Roover, Kim – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance (MI) is required for validly comparing latent constructs measured by multiple ordinal self-report items. Non-invariances may occur when disregarding (group differences in) an acquiescence response style (ARS; an agreeing tendency regardless of item content). If non-invariance results solely from neglecting ARS, one should…
Descriptors: Error of Measurement, Structural Equation Models, Construct Validity, Measurement Techniques
D'Urso, E. Damiano; Tijmstra, Jesper; Vermunt, Jeroen K.; De Roover, Kim – Educational and Psychological Measurement, 2023
Assessing the measurement model (MM) of self-report scales is crucial to obtain valid measurements of individuals' latent psychological constructs. This entails evaluating the number of measured constructs and determining which construct is measured by which item. Exploratory factor analysis (EFA) is the most-used method to evaluate these…
Descriptors: Factor Analysis, Measurement Techniques, Self Evaluation (Individuals), Psychological Patterns
Raborn, Anthony W.; Leite, Walter L.; Marcoulides, Katerina M. – International Educational Data Mining Society, 2019
Short forms of psychometric scales have been commonly used in educational and psychological research to reduce the burden of test administration. However, it is challenging to select items for a short form that preserve the validity and reliability of the scores of the original scale. This paper presents and evaluates multiple automated methods…
Descriptors: Psychometrics, Measures (Individuals), Mathematics, Heuristics
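The selection problem the abstract describes can be made concrete with a deliberately simple sketch. The paper evaluates automated (metaheuristic) search methods; the Python function below is not one of them, only a naive greedy baseline that repeatedly drops the item whose removal best preserves Cronbach's alpha until a target length is reached. All names and the alpha-only criterion are illustrative assumptions.

import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for an n x k matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def greedy_short_form(scores, target_len):
    """Drop one item at a time, keeping the subset with the highest alpha."""
    assert target_len >= 2, "alpha is undefined for fewer than two items"
    keep = list(range(scores.shape[1]))
    while len(keep) > target_len:
        candidates = ([j for j in keep if j != drop] for drop in keep)
        keep = max(candidates, key=lambda subset: cronbach_alpha(scores[:, subset]))
    return keep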
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2017
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Descriptors: Error of Measurement, Factor Analysis, Research Methodology, Psychometrics
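The claim can be seen from a one-line variance decomposition (standard notation, not necessarily the authors'). If the observed variables decompose as $\mathbf{x} = \mathbf{t} + \mathbf{e}$ with true and error parts uncorrelated, then any principal component $c = \mathbf{w}'\mathbf{x}$ satisfies

$$\operatorname{Var}(c) = \mathbf{w}'\boldsymbol{\Sigma}_t\,\mathbf{w} + \mathbf{w}'\boldsymbol{\Sigma}_e\,\mathbf{w},$$

so the component carries error variance $\mathbf{w}'\boldsymbol{\Sigma}_e\,\mathbf{w} > 0$ whenever any measure with a nonzero weight in $\mathbf{w}$ contains measurement error.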
Stanley, Leanne M.; Edwards, Michael C. – Educational and Psychological Measurement, 2016
The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…
Descriptors: Test Reliability, Goodness of Fit, Scores, Patients
Merkle, Edgar C.; Zeileis, Achim – Psychometrika, 2013
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…
Descriptors: Factor Analysis, Evaluation Methods, Tests, Psychometrics
Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is a lack of research on the effects of outliers on the decisions about the number of factors to retain in an exploratory factor analysis, especially for outliers arising from unintended and unknowingly included subpopulations. The purpose of the present research was to investigate how outliers from an unintended and unknowingly included…
Descriptors: Factor Analysis, Factor Structure, Evaluation Research, Evaluation Methods
Ferrando, Pere Joan – Applied Psychological Measurement, 2010
This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…
Descriptors: Factor Analysis, Statistics, Psychological Studies, Simulation
Tran, Ulrich S.; Formann, Anton K. – Educational and Psychological Measurement, 2009
Parallel analysis has been shown to be suitable for dimensionality assessment in factor analysis of continuous variables. There have also been attempts to demonstrate that it may be used to uncover the factorial structure of binary variables conforming to the unidimensional normal ogive model. This article provides both theoretical and empirical…
Descriptors: Simulation, Factor Analysis, Correlation, Evaluation Methods
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E. – Educational and Psychological Measurement, 2009
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…
Descriptors: Simulation, Item Response Theory, Test Items, Factor Analysis
Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2007
The impact of outliers on Cronbach's coefficient α has not been documented in the psychometric or statistical literature. This is an important gap because coefficient α is the most widely used measurement statistic in all of the social, educational, and health sciences. The impact of outliers on coefficient α is investigated for…
Descriptors: Psychometrics, Computation, Reliability, Monte Carlo Methods
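For reference, the standard definition of coefficient alpha (not quoted from the article): with $k$ items $Y_1, \dots, Y_k$ and total score $X = \sum_{i=1}^{k} Y_i$,

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right).$$

Outliers inflate both the item variances in the numerator and the total-score variance in the denominator, so their net effect on α is not obvious from the formula alone, which is presumably why the question calls for the Monte Carlo treatment listed in the descriptors.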
Jensen, Arthur R.; Weng, Li-Jen – Intelligence, 1994
The stability of psychometric "g," the general factor of intelligence, is investigated in simulated correlation matrices and in typical empirical data from a large battery of mental tests. "g" is robust and almost invariant across methods of analysis. A reasonable strategy for estimating "g" is suggested. (SLD)
Descriptors: Correlation, Estimation (Mathematics), Factor Analysis, Intelligence
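As a hedged illustration of the kind of cross-method comparison the abstract reports (not Jensen and Weng's analysis or data), the Python sketch below estimates a general factor from a correlation matrix in two ways, as the first principal component and as the first iterated principal-axis factor, and correlates the two loading vectors; near-perfect agreement is what "robust and almost invariant across methods" looks like numerically. All function names and the toy loading values are illustrative.

import numpy as np

def first_pc_loadings(R):
    """Loadings of the first principal component of a correlation matrix R."""
    vals, vecs = np.linalg.eigh(R)
    v = vecs[:, -1] * np.sqrt(vals[-1])       # eigh sorts ascending; last is largest
    return v if v.sum() >= 0 else -v          # fix the arbitrary sign

def first_paf_loadings(R, n_iter=50):
    """First-factor loadings from iterated principal-axis factoring (one factor)."""
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))     # initial communalities (SMCs)
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)
        vals, vecs = np.linalg.eigh(Rr)
        load = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))
        h2 = np.minimum(load ** 2, 1.0)            # update communalities from the single factor
    return load if load.sum() >= 0 else -load

# Toy one-factor correlation matrix (illustrative, not the article's battery of tests).
lam = np.array([0.8, 0.7, 0.6, 0.5, 0.4])
R = np.outer(lam, lam)
np.fill_diagonal(R, 1.0)
pc, paf = first_pc_loadings(R), first_paf_loadings(R)
print(np.corrcoef(pc, paf)[0, 1])                  # close to 1.0: the two "g" estimates agree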