Showing 1 to 15 of 54 results
Peer reviewed
Gonzalez, Oscar – Educational and Psychological Measurement, 2023
When scores are used to make decisions about respondents, it is of interest to estimate classification accuracy (CA), the probability of making a correct decision, and classification consistency (CC), the probability of making the same decision across two parallel administrations of the measure. Model-based estimates of CA and CC computed from the…
Descriptors: Classification, Accuracy, Intervals, Probability
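The CA/CC distinction in the entry above lends itself to a quick simulation. The sketch below is a minimal classical-test-theory illustration, not the model-based estimators the article studies; the reliability value, cut score, and sample size are all hypothetical.

```python
import numpy as np

# Hypothetical setup: true scores plus error, a cut score at the mean,
# and two parallel administrations of the same measure.
rng = np.random.default_rng(0)
n = 200_000
cut = 0.0
reliability = 0.8  # hypothetical score reliability

true_sd = np.sqrt(reliability)
err_sd = np.sqrt(1 - reliability)

true = rng.normal(0.0, true_sd, n)
obs1 = true + rng.normal(0.0, err_sd, n)  # first administration
obs2 = true + rng.normal(0.0, err_sd, n)  # parallel administration

# Classification accuracy (CA): the observed decision matches the decision
# the true score would produce.
ca = np.mean((obs1 >= cut) == (true >= cut))
# Classification consistency (CC): the two administrations agree.
cc = np.mean((obs1 >= cut) == (obs2 >= cut))
```

With the cut at the mean and reliability .8, CA exceeds CC: the observed score correlates more strongly with the true score (sqrt of reliability) than with a second error-laden administration (the reliability itself).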
Peer reviewed
Schweizer, Karl; Gold, Andreas; Krampen, Dorothea – Educational and Psychological Measurement, 2023
In modeling missing data, the missing data latent variable of the confirmatory factor model accounts for systematic variation associated with missing data so that replacement of what is missing is not required. This study aimed at extending the modeling missing data approach to tetrachoric correlations as input and at exploring the consequences of…
Descriptors: Data, Models, Factor Analysis, Correlation
Peer reviewed
Karl Schweizer; Andreas Gold; Dorothea Krampen; Stefan Troche – Educational and Psychological Measurement, 2024
Conceptualizing two-variable disturbances preventing good model fit in confirmatory factor analysis as item-level method effects instead of correlated residuals avoids violating the principle that residual variation is unique for each item. The possibility of representing such a disturbance by a method factor of a bifactor measurement model was…
Descriptors: Correlation, Factor Analysis, Measurement Techniques, Item Analysis
Peer reviewed
Liu, Xiaoling; Cao, Pei; Lai, Xinzhen; Wen, Jianbing; Yang, Yanyun – Educational and Psychological Measurement, 2023
Percentage of uncontaminated correlations (PUC), explained common variance (ECV), and omega hierarchical (ω_H) have been used to assess the degree to which a scale is essentially unidimensional and to predict structural coefficient bias when a unidimensional measurement model is fit to multidimensional data. The usefulness of these indices…
Descriptors: Correlation, Measurement Techniques, Prediction, Regression (Statistics)
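The three indices named above can be computed directly from a bifactor loading pattern. A minimal sketch, assuming a hypothetical nine-item instrument with one general factor, three orthogonal group factors of three items each, and unit item variances:

```python
import numpy as np

# Hypothetical bifactor loadings: column 0 = general factor, columns 1-3
# = orthogonal group factors (each item loads on exactly one group factor).
L = np.array([
    [0.70, 0.40, 0.00, 0.00],
    [0.60, 0.45, 0.00, 0.00],
    [0.65, 0.35, 0.00, 0.00],
    [0.70, 0.00, 0.40, 0.00],
    [0.60, 0.00, 0.45, 0.00],
    [0.65, 0.00, 0.35, 0.00],
    [0.70, 0.00, 0.00, 0.40],
    [0.60, 0.00, 0.00, 0.45],
    [0.65, 0.00, 0.00, 0.35],
])
gen = L[:, 0]
grp = L[:, 1:]

# ECV: share of common variance attributable to the general factor.
ecv = (gen**2).sum() / (L**2).sum()

# PUC: proportion of item pairs whose correlation is "uncontaminated" by
# group factors, i.e., pairs loading on different group factors.
n_items = L.shape[0]
total_pairs = n_items * (n_items - 1) / 2
within = sum((g > 0).sum() * ((g > 0).sum() - 1) / 2 for g in grp.T)
puc = (total_pairs - within) / total_pairs

# Omega hierarchical: general-factor variance over total composite
# variance (unit item variances, so uniqueness = 1 - communality).
uniq = 1 - (L**2).sum(axis=1)
total_var = gen.sum()**2 + (grp.sum(axis=0)**2).sum() + uniq.sum()
omega_h = gen.sum()**2 / total_var
```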
Peer reviewed
Tenko Raykov; Christine DiStefano; Lisa Calvocoressi – Educational and Psychological Measurement, 2024
This note demonstrates that the widely used Bayesian Information Criterion (BIC) need not be generally viewed as a routinely dependable index for model selection when the bifactor and second-order factor models are examined as rival means for data description and explanation. To this end, we use an empirically relevant setting with…
Descriptors: Bayesian Statistics, Models, Decision Making, Comparative Analysis
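For readers unfamiliar with the criterion, BIC penalizes model complexity via a log-sample-size term, so a simpler model can be preferred despite a worse log-likelihood. The log-likelihoods and parameter counts below are hypothetical, purely to show the arithmetic:

```python
import math

def bic(loglik, n_params, n_obs):
    """Bayesian Information Criterion: smaller is better."""
    return -2 * loglik + n_params * math.log(n_obs)

# Hypothetical fit results for two rival factor models of the same data.
n_obs = 500
bic_bifactor = bic(loglik=-10450.0, n_params=27, n_obs=n_obs)
bic_second_order = bic(loglik=-10460.0, n_params=21, n_obs=n_obs)

# The bifactor model fits better (higher log-likelihood), but its extra
# six parameters cost more than the fit gain at this sample size.
preferred = "bifactor" if bic_bifactor < bic_second_order else "second-order"
```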
Peer reviewed
Raykov, Tenko; Anthony, James C.; Menold, Natalja – Educational and Psychological Measurement, 2023
The population relationship between coefficient alpha and scale reliability is studied in the widely used setting of unidimensional multicomponent measuring instruments. It is demonstrated that for any set of component loadings on the common factor, regardless of the extent of their inequality, the discrepancy between alpha and reliability can be…
Descriptors: Correlation, Evaluation Research, Reliability, Measurement Techniques
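The alpha-reliability discrepancy discussed above is easy to reproduce numerically. A minimal sketch for a congeneric (unidimensional, unequal-loading) model with hypothetical loadings and unit item variances:

```python
import numpy as np

# Hypothetical congeneric model: unequal loadings on one common factor;
# unit item variances, so uniqueness = 1 - loading**2.
loadings = np.array([0.8, 0.7, 0.6, 0.5])
Sigma = np.outer(loadings, loadings)  # model-implied covariances
np.fill_diagonal(Sigma, 1.0)          # unit variances on the diagonal

k = len(loadings)
total = Sigma.sum()                   # composite (sum-score) variance
alpha = k / (k - 1) * (1 - np.trace(Sigma) / total)
omega = loadings.sum() ** 2 / total   # scale reliability (coefficient omega)
```

Under these assumptions alpha understates reliability (here by less than .01, despite clearly unequal loadings), consistent with the article's point that loading inequality alone does not determine the size of the discrepancy.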
Peer reviewed
David Goretzko; Karik Siemund; Philipp Sterner – Educational and Psychological Measurement, 2024
Confirmatory factor analyses (CFA) are often used in psychological research when developing measurement models for psychological constructs. Evaluating CFA model fit can be quite challenging, as tests for exact model fit may focus on negligible deviances, while fit indices cannot be interpreted absolutely without specifying thresholds or cutoffs.…
Descriptors: Factor Analysis, Goodness of Fit, Psychological Studies, Measurement
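Two common fit indices, RMSEA and CFI, follow directly from chi-square statistics. The fit values below are hypothetical CFA output; note that the conventional cutoffs (e.g., RMSEA < .06, CFI > .95) are exactly the kind of fixed thresholds the article questions:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (smaller = closer fit)."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative fit index relative to the independence (baseline) model."""
    d_target = max(chi2 - df, 0)
    d_base = max(chi2_base - df_base, d_target)
    return 1 - d_target / d_base if d_base > 0 else 1.0

# Hypothetical output: target model and independence-model chi-squares.
r = rmsea(chi2=85.3, df=40, n=400)
c = cfi(chi2=85.3, df=40, chi2_base=1200.0, df_base=55)
```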
Peer reviewed
Beauducel, André; Hilger, Norbert – Educational and Psychological Measurement, 2021
Methods for optimal factor rotation of two-facet loading matrices have recently been proposed. However, the problem of the correct number of factors to retain for rotation of two-facet loading matrices has rarely been addressed in the context of exploratory factor analysis. Most previous studies were based on the observation that two-facet loading…
Descriptors: Factor Analysis, Statistical Analysis, Correlation, Models
Peer reviewed
Stephanie M. Bell; R. Philip Chalmers; David B. Flora – Educational and Psychological Measurement, 2024
Coefficient omega indices are model-based composite reliability estimates that have become increasingly popular. A coefficient omega index estimates how reliably an observed composite score measures a target construct as represented by a factor in a factor-analysis model; as such, the accuracy of omega estimates is likely to depend on correct…
Descriptors: Influences, Models, Measurement Techniques, Reliability
Peer reviewed
Hoang V. Nguyen; Niels G. Waller – Educational and Psychological Measurement, 2024
We conducted an extensive Monte Carlo study of factor-rotation local solutions (LS) in multidimensional, two-parameter logistic (M2PL) item response models. In this study, we simulated more than 19,200 data sets that were drawn from 96 model conditions and performed more than 7.6 million rotations to examine the influence of (a) slope parameter…
Descriptors: Monte Carlo Methods, Item Response Theory, Correlation, Error of Measurement
Peer reviewed
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
Peer reviewed
Guo, Wenjing; Choi, Youn-Jeng – Educational and Psychological Measurement, 2023
Determining the number of dimensions is extremely important in applying item response theory (IRT) models to data. Traditional and revised parallel analyses have been proposed within the factor analysis framework, and both have shown some promise in assessing dimensionality. However, their performance in the IRT framework has not been…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Guidelines
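Classical (linear) parallel analysis, the baseline the article's IRT variants build on, retains dimensions whose observed eigenvalues exceed those of comparable random data. A sketch on hypothetical two-factor simple-structure data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 1000 respondents, 8 items, two uncorrelated factors
# with simple structure (each factor drives half the items).
n, p = 1000, 8
load = np.zeros((2, p))
load[0, :4] = 0.7   # factor 1 -> items 1-4
load[1, 4:] = 0.7   # factor 2 -> items 5-8
f = rng.normal(size=(n, 2))
x = f @ load + rng.normal(scale=0.7, size=(n, p))

def eigs(data):
    """Eigenvalues of the item correlation matrix, descending."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(data.T)))[::-1]

obs = eigs(x)
# Mean eigenvalues over 50 random datasets of the same shape.
rand_mean = np.mean([eigs(rng.normal(size=(n, p))) for _ in range(50)], axis=0)
n_retained = int((obs > rand_mean).sum())  # dimensions to retain
```

Here the first two observed eigenvalues (about 2.5 each) clear the random benchmark (near 1), and the rest fall below it, so two dimensions are retained.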
Peer reviewed
Raykov, Tenko; DiStefano, Christine; Calvocoressi, Lisa; Volker, Martin – Educational and Psychological Measurement, 2022
A class of effect size indices are discussed that evaluate the degree to which two nested confirmatory factor analysis models differ from each other in terms of fit to a set of observed variables. These descriptive effect measures can be used to quantify the impact of parameter restrictions imposed in an initially considered model and are free…
Descriptors: Effect Size, Models, Measurement Techniques, Factor Analysis
Peer reviewed
Ferrando, Pere J.; Navarro-González, David – Educational and Psychological Measurement, 2021
Item response theory "dual" models (DMs) in which both items and individuals are viewed as sources of differential measurement error so far have been proposed only for unidimensional measures. This article proposes two multidimensional extensions of existing DMs: the M-DTCRM (dual Thurstonian continuous response model), intended for…
Descriptors: Item Response Theory, Error of Measurement, Models, Factor Analysis
Peer reviewed
Lee, Bitna; Sohn, Wonsook – Educational and Psychological Measurement, 2022
A Monte Carlo study was conducted to compare the performance of a level-specific (LS) fit evaluation with that of a simultaneous (SI) fit evaluation in multilevel confirmatory factor analysis (MCFA) models. We extended previous studies by examining their performance under MCFA models with different factor structures across levels. In addition,…
Descriptors: Goodness of Fit, Factor Structure, Monte Carlo Methods, Factor Analysis