Showing 1 to 15 of 89 results
Peer reviewed
Beauducel, André; Hilger, Norbert – Educational and Psychological Measurement, 2022
In the context of Bayesian factor analysis, it is possible to compute plausible values, which might be used as covariates or predictors or to provide individual scores for the Bayesian latent variables. Previous simulation studies ascertained the validity of mean plausible values by the mean squared difference of the mean plausible values and the…
Descriptors: Bayesian Statistics, Factor Analysis, Prediction, Simulation
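The validity criterion mentioned in this abstract can be sketched numerically: draw several plausible values per person from an assumed posterior of the latent factor, average them into individual scores, and compute the mean squared difference against the generating factor scores. The distributions and parameters below are illustrative assumptions, not the authors' simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_draws = 500, 20

# Illustrative setup: true latent factor scores (standard normal).
theta_true = rng.normal(size=n_persons)

# Assume the Bayesian model yields, for each person, a posterior for the
# factor centered on a shrunken version of the true score (hypothetical).
posterior_mean = 0.8 * theta_true
posterior_sd = 0.6

# Plausible values: random draws from each person's posterior.
pv = rng.normal(loc=posterior_mean[:, None], scale=posterior_sd,
                size=(n_persons, n_draws))

# Mean plausible values serve as individual scores for the latent variable.
mean_pv = pv.mean(axis=1)

# Validity check from the abstract: mean squared difference between the
# mean plausible values and the generating factor scores.
msd = np.mean((mean_pv - theta_true) ** 2)
```

Averaging over draws shrinks the sampling noise of any single plausible value, which is why the mean plausible value, rather than a single draw, is the usual candidate for a covariate or predictor.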
Peer reviewed
Julia-Kim Walther; Martin Hecht; Steffen Zitzmann – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Small sample sizes pose a severe threat to convergence and accuracy of between-group level parameter estimates in multilevel structural equation modeling (SEM). However, in certain situations, such as pilot studies or when populations are inherently small, increasing sample sizes is not feasible. As a remedy, we propose a two-stage regularized…
Descriptors: Sample Size, Hierarchical Linear Modeling, Structural Equation Models, Matrices
Peer reviewed
Cao, Chunhua; Kim, Eun Sook; Chen, Yi-Hsin; Ferron, John – Educational and Psychological Measurement, 2021
This study examined the impact of omitting covariate interaction effects on parameter estimates in multilevel multiple-indicator multiple-cause models, as well as the sensitivity of fit indices to model misspecification when the between-level, within-level, or cross-level interaction effect was left out of the models. The parameter estimates…
Descriptors: Goodness of Fit, Hierarchical Linear Modeling, Computation, Models
Peer reviewed
Mostafa Hosseinzadeh; Ki Lynn Matlock Cole – Educational and Psychological Measurement, 2024
In real-world situations, multidimensional data may appear on large-scale tests or psychological surveys. The purpose of this study was to investigate the effects of the quantity and magnitude of cross-loadings and model specification on item parameter recovery in multidimensional Item Response Theory (MIRT) models, especially when the model was…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Algorithms
Peer reviewed
PDF on ERIC
Jehanzeb Rashid Cheema – Journal of Education in Muslim Societies, 2024
This study explores the relationship between the Spiral Dynamics and the 3H (head, heart, hands) models of human growth and development, using constructs such as empathy, moral reasoning, forgiveness, and community mindedness that have been shown to have implications for education. The specific research question is, "Can a combination of…
Descriptors: Correlation, Factor Analysis, Computer Software, Moral Values
Peer reviewed
Raykov, Tenko; DiStefano, Christine; Calvocoressi, Lisa; Volker, Martin – Educational and Psychological Measurement, 2022
A class of effect size indices are discussed that evaluate the degree to which two nested confirmatory factor analysis models differ from each other in terms of fit to a set of observed variables. These descriptive effect measures can be used to quantify the impact of parameter restrictions imposed in an initially considered model and are free…
Descriptors: Effect Size, Models, Measurement Techniques, Factor Analysis
Peer reviewed
PDF on ERIC
Fatih Orcan – International Journal of Assessment Tools in Education, 2023
Among reliability coefficients, Cronbach's alpha and McDonald's omega are the most commonly used. Alpha is based on inter-item correlations, while omega is based on a factor analysis solution. This study uses simulated ordinal data sets to test whether the alpha and omega produce different estimates. Their performances were compared according to the…
Descriptors: Statistical Analysis, Monte Carlo Methods, Correlation, Factor Analysis
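The distinction the abstract draws can be made concrete: alpha is computed from item and total-score variances, while omega is computed from one-factor loadings and uniquenesses. The sketch below uses simulated continuous one-factor data (the study itself used ordinal data) with illustrative loadings; under tau-equivalence the two coefficients should roughly agree.

```python
import numpy as np

def cronbach_alpha(x):
    """Alpha from variances: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    x = np.asarray(x, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def mcdonald_omega(loadings, uniquenesses):
    """Omega from a one-factor solution: (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
    lam2 = np.sum(loadings) ** 2
    return lam2 / (lam2 + np.sum(uniquenesses))

# Illustrative one-factor data: 6 items, equal loadings of 0.7, unit item variances.
rng = np.random.default_rng(1)
n, k, load = 1000, 6, 0.7
factor = rng.normal(size=(n, 1))
errors = rng.normal(scale=np.sqrt(1 - load**2), size=(n, k))
x = load * factor + errors

alpha = cronbach_alpha(x)
omega = mcdonald_omega([load] * k, [1 - load**2] * k)
```

With equal loadings (tau-equivalence) both estimates land near the population reliability; when loadings differ across items, alpha falls below omega, which is one of the conditions such simulation studies compare.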
Ben Stenhaug; Ben Domingue – Grantee Submission, 2022
The fit of an item response model is typically conceptualized as whether a given model could have generated the data. We advocate for an alternative view of fit, "predictive fit", based on the model's ability to predict new data. We derive two predictive fit metrics for item response models that assess how well an estimated item response…
Descriptors: Goodness of Fit, Item Response Theory, Prediction, Models
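A generic way to operationalize "predictive fit" is the log-likelihood that previously estimated parameters assign to new response data; the paper derives its own two metrics, which are not reproduced here. The sketch below scores fresh responses under a two-parameter logistic (2PL) IRT model with illustrative, assumed parameter values.

```python
import numpy as np

def irt_2pl_prob(theta, a, b):
    """P(correct) under the two-parameter logistic IRT model."""
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))

def heldout_loglik(responses, theta, a, b):
    """Predictive-fit idea: log-likelihood of held-out responses under
    previously estimated item parameters and abilities."""
    p = irt_2pl_prob(theta, a, b)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

rng = np.random.default_rng(2)
n, k = 200, 10
theta = rng.normal(size=n)             # "estimated" abilities (illustrative)
a = rng.uniform(0.8, 2.0, size=k)      # discriminations (illustrative)
b = rng.normal(size=k)                 # difficulties (illustrative)

# "New data": a fresh binary response matrix generated from the same parameters.
new_resp = (rng.random((n, k)) < irt_2pl_prob(theta, a, b)).astype(float)
ll = heldout_loglik(new_resp, theta, a, b)
```

A model that merely fits the calibration sample can still predict new responses poorly; comparing held-out log-likelihoods across candidate models is the standard way to detect that.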
Peer reviewed
Fu, Yanyan; Strachan, Tyler; Ip, Edward H.; Willse, John T.; Chen, Shyh-Huei; Ackerman, Terry – International Journal of Testing, 2020
This research examined correlation estimates between latent abilities when using the two-dimensional and three-dimensional compensatory and noncompensatory item response theory models. Simulation study results showed that the recovery of the latent correlation was best when the test contained 100% of simple structure items for all models and…
Descriptors: Item Response Theory, Models, Test Items, Simulation
Peer reviewed
Bakbergenuly, Ilyas; Hoaglin, David C.; Kulinskaya, Elena – Research Synthesis Methods, 2020
In random-effects meta-analysis the between-study variance (τ²) has a key role in assessing heterogeneity of study-level estimates and combining them to estimate an overall effect. For odds ratios the most common methods suffer from bias in estimating τ² and the overall effect and produce confidence intervals…
Descriptors: Meta Analysis, Statistical Bias, Intervals, Sample Size
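The τ² the abstract refers to is most commonly estimated by the DerSimonian-Laird method-of-moments formula, which is also the kind of estimator whose bias for odds ratios motivates this paper. A minimal sketch with illustrative log-odds-ratio data:

```python
import numpy as np

def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimate of the between-study variance tau^2."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    k = len(y)
    mu_fixed = np.sum(w * y) / np.sum(w)           # fixed-effect pooled estimate
    q = np.sum(w * (y - mu_fixed) ** 2)            # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / c)             # truncate negative estimates at zero

# Illustrative study-level log odds ratios and their within-study variances.
log_or = [0.5, 0.2, 0.8, -0.1, 0.4]
var = [0.04, 0.09, 0.05, 0.12, 0.06]
tau2 = dersimonian_laird_tau2(log_or, var)
```

The truncation at zero and the plug-in use of within-study variances are two of the features that make this estimator biased in small meta-analyses of odds ratios, which is the problem the paper addresses.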
Peer reviewed
Hung, Su-Pin; Huang, Hung-Yu – Journal of Educational and Behavioral Statistics, 2022
To address response style or bias in rating scales, forced-choice items are often used to request that respondents rank their attitudes or preferences among a limited set of options. The rating scales used by raters to render judgments on ratees' performance also contribute to rater bias or errors; consequently, forced-choice items have recently…
Descriptors: Evaluation Methods, Rating Scales, Item Analysis, Preferences
Peer reviewed
Fangxing Bai; Ben Kelcey – Society for Research on Educational Effectiveness, 2024
Purpose and Background: Despite the flexibility of multilevel structural equation modeling (MLSEM), a practical limitation many researchers encounter is how to effectively estimate model parameters with typical sample sizes when there are many levels of (potentially disparate) nesting. We develop a method-of-moment corrected maximum likelihood…
Descriptors: Maximum Likelihood Statistics, Structural Equation Models, Sample Size, Faculty Development
Peer reviewed
Zhou, Sherry; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2020
The semi-generalized partial credit model (Semi-GPCM) has been proposed as a unidimensional modeling method for handling not applicable scale responses and neutral scale responses, and it has been suggested that the model may be of use in handling missing data in scale items. The purpose of this study is to evaluate the ability of the…
Descriptors: Models, Statistical Analysis, Response Style (Tests), Test Items
Peer reviewed
Wang, Xiaolin; Svetina, Dubravka; Dai, Shenghai – Journal of Experimental Education, 2019
Recently, interest in test subscore reporting for diagnosis purposes has been growing rapidly. The two simulation studies here examined factors (sample size, number of subscales, correlation between subscales, and three factors affecting subscore reliability: number of items per subscale, item parameter distribution, and data generating model)…
Descriptors: Value Added Models, Scores, Sample Size, Correlation
Peer reviewed
Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael – Applied Developmental Science, 2017
Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still the question of how these methods perform in within-subjects P-technique factor analysis. A…
Descriptors: Factor Analysis, Structural Equation Models, Correlation, Sample Size
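One of the retention criteria typically evaluated in such comparisons is Horn's parallel analysis: retain factors whose observed eigenvalues exceed the mean eigenvalues of random data of the same dimensions. The sketch below applies it to illustrative between-subjects two-factor data; P-technique would apply the same criterion to one person's multivariate time series.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: count observed correlation-matrix eigenvalues
    that exceed the mean eigenvalues from random normal data of equal shape."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.zeros(k)
    for _ in range(n_sims):
        sim = rng.normal(size=(n, k))
        rand_eig += np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    rand_eig /= n_sims
    return int(np.sum(obs_eig > rand_eig))

# Illustrative data: 6 indicators, 2 orthogonal factors, loadings 0.7-0.8.
rng = np.random.default_rng(3)
f = rng.normal(size=(300, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.0], [0.8, 0.0],
                     [0.0, 0.8], [0.0, 0.7], [0.0, 0.8]])
x = f @ loadings.T + rng.normal(scale=0.5, size=(300, 6))
n_factors = parallel_analysis(x)
```

Comparing against the mean random eigenvalue (rather than, say, the eigenvalues-greater-than-one rule) guards against retaining factors that capture only sampling noise, which is why parallel analysis is a common benchmark in these evaluations.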