Timothy R. Konold; Elizabeth A. Sanders; Kelvin Afolabi – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Measurement invariance (MI) is an essential part of validity evidence concerned with ensuring that tests function similarly across groups, contexts, and time. Most evaluations of MI involve multigroup confirmatory factor analyses (MGCFA) that assume simple structure. However, recent research has shown that constraining non-target indicators to…
Descriptors: Evaluation Methods, Error of Measurement, Validity, Monte Carlo Methods
Tenko Raykov – Structural Equation Modeling: A Multidisciplinary Journal, 2024
This note demonstrates that measurement invariance does not guarantee meaningful and valid group comparisons in multiple-population settings. The article follows on a recent critical discussion by Robitzsch and Lüdtke, who argued that measurement invariance was not a pre-requisite for such comparisons. Within the framework of common factor…
Descriptors: Error of Measurement, Prerequisites, Factor Analysis, Evaluation Methods
David Goretzko; Karik Siemund; Philipp Sterner – Educational and Psychological Measurement, 2024
Confirmatory factor analyses (CFA) are often used in psychological research when developing measurement models for psychological constructs. Evaluating CFA model fit can be quite challenging, as tests for exact model fit may focus on negligible deviances, while fit indices cannot be interpreted absolutely without specifying thresholds or cutoffs.…
Descriptors: Factor Analysis, Goodness of Fit, Psychological Studies, Measurement
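The fit indices whose cutoffs are at issue in entries like this one are simple functions of the model chi-square. As a minimal sketch (the input values below are hypothetical, chosen only for illustration), RMSEA and CFI can be computed as:

```python
import math

def rmsea(chisq, df, n):
    """Root mean square error of approximation for a fitted model.

    Standard population-discrepancy formula; returns 0 when the model
    chi-square falls below its degrees of freedom.
    """
    return math.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

def cfi(chisq_m, df_m, chisq_b, df_b):
    """Comparative fit index of a model relative to the baseline
    (independence) model."""
    d_model = max(chisq_m - df_m, 0.0)
    d_base = max(chisq_b - df_b, d_model)
    return 1.0 - d_model / d_base if d_base > 0 else 1.0

# Hypothetical chi-square results for a target and a baseline model:
print(round(rmsea(85.3, 40, 500), 3))          # target model, N = 500
print(round(cfi(85.3, 40, 1200.0, 55), 3))     # vs. independence model
```

The point stressed by this literature is that conventional cutoffs (e.g., RMSEA < .06, CFI > .95) are not absolute: the same chi-square yields different verdicts depending on sample size, model size, and misspecification type.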
Klauth, Bo – ProQuest LLC, 2023
In conducting confirmatory factor analysis with ordered response items, the literature suggests that when the number of responses is five and item skewness (IS) is approximately normal, researchers can employ maximum likelihood with robust standard errors (MLR). However, MLR can yield biased factor loadings (FL) and FL standard errors (FLSE) when…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Error of Measurement
Manuel T. Rein; Jeroen K. Vermunt; Kim De Roover; Leonie V. D. E. Vogelsmeier – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Researchers often study dynamic processes of latent variables in everyday life, such as the interplay of positive and negative affect over time. An intuitive approach is to first estimate the measurement model of the latent variables, then compute factor scores, and finally use these factor scores as observed scores in vector autoregressive…
Descriptors: Measurement Techniques, Factor Analysis, Scores, Validity
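The two-step approach this abstract describes (estimate the measurement model, compute factor scores, then treat the scores as observed in an autoregressive model) can be illustrated with a small numpy simulation. All loadings and error variances below are assumed values for the sketch, not from the paper; the regression-method scoring and least-squares AR(1) fit stand in for the full VAR workflow:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a latent AR(1) process measured by three noisy indicators ---
T, phi = 2000, 0.6                      # time points, true autoregressive effect
lam = np.array([1.0, 0.8, 0.7])         # factor loadings (assumed for the sketch)
theta = np.array([0.4, 0.5, 0.6])       # measurement-error variances (assumed)

eta = np.zeros(T)
for t in range(1, T):
    eta[t] = phi * eta[t - 1] + rng.normal(scale=np.sqrt(1 - phi**2))
y = eta[:, None] * lam + rng.normal(scale=np.sqrt(theta), size=(T, 3))

# --- Step 1: regression-method factor scores from the measurement model ---
sigma = np.outer(lam, lam) + np.diag(theta)   # model-implied covariance of y
w = np.linalg.solve(sigma, lam)               # regression-score weights
scores = y @ w

# --- Step 2: fit AR(1) to the factor scores by ordinary least squares ---
phi_hat = np.polyfit(scores[:-1], scores[1:], 1)[0]
print(f"true phi = {phi}, estimated from factor scores = {phi_hat:.2f}")
```

Because the factor scores still carry measurement error, the estimated autoregressive effect is attenuated relative to the true value, which is exactly the kind of bias that motivates this line of research.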
Yuanfang Liu; Mark H. C. Lai; Ben Kelcey – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance holds when a latent construct is measured in the same way across different levels of background variables (continuous or categorical) while controlling for the true value of that construct. Using Monte Carlo simulation, this paper compares the multiple indicators, multiple causes (MIMIC) model and MIMIC-interaction to a…
Descriptors: Classification, Accuracy, Error of Measurement, Correlation
Pere J. Ferrando; David Navarro-González; Fabia Morales-Vives – Educational and Psychological Measurement, 2025
The problem of local item dependencies (LIDs) is very common in personality and attitude measures, particularly in those that measure narrow-bandwidth dimensions. At the structural level, these dependencies can be modeled by using extended factor analytic (FA) solutions that include correlated residuals. However, the effects that LIDs have on the…
Descriptors: Scores, Accuracy, Evaluation Methods, Factor Analysis
Maritza Casas; Stephen G. Sireci – International Journal of Testing, 2025
In this study, we take a critical look at the degree to which the measurement of bullying and sense of belonging at school is invariant across groups of students defined by immigrant status. Our study focuses on the invariance of these constructs as measured on a recent PISA administration and includes a discussion of two statistical methods for…
Descriptors: Error of Measurement, Immigrants, Peer Groups, Bullying
Guler, Gul; Cikrikci, Rahime Nukhet – International Journal of Assessment Tools in Education, 2022
The purpose of this study was to investigate the Type I Error findings and power rates of the methods used to determine dimensionality in unidimensional and bidimensional psychological constructs for various conditions (characteristic of the distribution, sample size, length of the test, and interdimensional correlation) and to examine the joint…
Descriptors: Comparative Analysis, Error of Measurement, Decision Making, Factor Analysis
Jobst, Lisa J.; Auerswald, Max; Moshagen, Morten – Educational and Psychological Measurement, 2022
Prior studies investigating the effects of non-normality in structural equation modeling typically induced non-normality in the indicator variables. This procedure neglects the factor analytic structure of the data, which is defined as the sum of latent variables and errors, so it is unclear whether previous results hold if the source of…
Descriptors: Goodness of Fit, Structural Equation Models, Error of Measurement, Factor Analysis
Montoya, Amanda K.; Edwards, Michael C. – Educational and Psychological Measurement, 2021
Model fit indices are being increasingly recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate for use in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the…
Descriptors: Goodness of Fit, Factor Analysis, Cutting Scores, Correlation
Wang, Ze – Large-scale Assessments in Education, 2022
In educational and psychological research, it is common to use latent factors to represent constructs and then to examine covariate effects on these latent factors. Using empirical data, this study applied three approaches to covariate effects on latent factors: the multiple-indicator multiple-cause (MIMIC) approach, multiple group confirmatory…
Descriptors: Comparative Analysis, Evaluation Methods, Grade 8, Mathematics Achievement
Greifer, Noah – ProQuest LLC, 2018
There has been some research in the use of propensity scores in the context of measurement error in the confounding variables; one recommended method is to generate estimates of the mis-measured covariate using a latent variable model, and to use those estimates (i.e., factor scores) in place of the covariate. I describe a simulation study…
Descriptors: Evaluation Methods, Probability, Scores, Statistical Analysis
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2017
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Descriptors: Error of Measurement, Factor Analysis, Research Methodology, Psychometrics
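The core claim of this entry, that any principal component extracted from fallible measures inherits error variance, can be checked numerically. In the sketch below (true-score covariance and error variances are assumed values for illustration), the error variance carried by a component with weight vector v and diagonal error covariance Theta is v' Theta v:

```python
import numpy as np

# True-score covariance plus diagonal measurement-error covariance
# (all values assumed for this illustration)
true_cov = np.array([[1.0, 0.6, 0.5],
                     [0.6, 1.0, 0.4],
                     [0.5, 0.4, 1.0]])
err_var = np.array([0.3, 0.2, 0.4])        # error variances of the measures
obs_cov = true_cov + np.diag(err_var)      # observed covariance matrix

# First principal component of the observed covariance matrix
evals, evecs = np.linalg.eigh(obs_cov)     # eigenvalues in ascending order
v = evecs[:, -1]                           # unit-norm weights of the first PC

pc_var = evals[-1]                         # total variance of the component
pc_err_var = v @ np.diag(err_var) @ v      # error variance in the component
print(f"PC variance {pc_var:.3f}, of which error {pc_err_var:.3f}")
```

Since v has unit norm, v' Theta v is a weighted average of the indicator error variances, so it is strictly positive whenever any measure contains error, which is the result the entry states.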