Showing 1 to 15 of 79 results
Peer reviewed
Pere J. Ferrando; Ana Hernández-Dorado; Urbano Lorenzo-Seva – Structural Equation Modeling: A Multidisciplinary Journal, 2024
A frequent criticism of exploratory factor analysis (EFA) is that it does not allow correlated residuals to be modelled, while they can be routinely specified in the confirmatory (CFA) model. In this article, we propose an EFA approach in which both the common factor solution and the residual matrix are unrestricted (i.e., the correlated residuals…
Descriptors: Correlation, Factor Analysis, Models, Goodness of Fit
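A minimal sketch of the underlying idea, not the estimator proposed in the article: fit an ordinary EFA and inspect the residual correlation matrix, whose off-diagonal entries are the correlated residuals that the proposed unrestricted approach would model. The use of the Python factor_analyzer package and the function name are assumptions for illustration.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # assumed dependency (pip install factor_analyzer)

def residual_correlations(X, n_factors=2):
    """Observed minus model-implied correlations after an ordinary EFA fit."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="minres")
    fa.fit(X)
    loadings = fa.loadings_                  # items x factors
    implied = loadings @ loadings.T          # common-factor part of the correlation matrix
    np.fill_diagonal(implied, 1.0)           # uniquenesses absorb the diagonal
    observed = np.corrcoef(X, rowvar=False)
    return observed - implied                # nonzero off-diagonals = correlated residuals
```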
Peer reviewed
David Goretzko; Karik Siemund; Philipp Sterner – Educational and Psychological Measurement, 2024
Confirmatory factor analyses (CFA) are often used in psychological research when developing measurement models for psychological constructs. Evaluating CFA model fit can be quite challenging, as tests for exact model fit may focus on negligible deviances, while fit indices cannot be interpreted absolutely without specifying thresholds or cutoffs.…
Descriptors: Factor Analysis, Goodness of Fit, Psychological Studies, Measurement
Peer reviewed
Joakim Wallmark; James O. Ramsay; Juan Li; Marie Wiberg – Journal of Educational and Behavioral Statistics, 2024
Item response theory (IRT) models the relationship between the possible scores on a test item and a test taker's attainment of the latent trait that the item is intended to measure. In this study, we compare two models for tests with polytomously scored items: the optimal scoring (OS) model, a nonparametric IRT model based on the principles of…
Descriptors: Item Response Theory, Test Items, Models, Scoring
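For context on the parametric side of such comparisons, the sketch below evaluates category probabilities under the graded response model, a standard parametric IRT model for polytomously scored items; it is not the OS model from the article, and all parameter values are made up for illustration.

```python
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Graded response model: category probabilities for one polytomous item
    with discrimination `a` and ordered thresholds `thresholds`."""
    thresholds = np.asarray(thresholds)
    # P(X >= k) for k = 1..K-1, bounded by 1 and 0 at the extremes
    p_ge = 1.0 / (1.0 + np.exp(-a * (theta - thresholds)))
    cum = np.concatenate(([1.0], p_ge, [0.0]))
    return cum[:-1] - cum[1:]                # P(X = k); sums to 1 across categories

print(grm_category_probs(theta=0.3, a=1.5, thresholds=[-1.0, 0.0, 1.2]))
```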
Peer reviewed
Wind, Stefanie A.; Ge, Yuan – Measurement: Interdisciplinary Research and Perspectives, 2023
In selected-response assessments such as attitude surveys with Likert-type rating scales, examinees often select from rating scale categories to reflect their locations on a construct. Researchers have observed that some examinees exhibit "response styles," which are systematic patterns of responses in which examinees are more likely to…
Descriptors: Goodness of Fit, Responses, Likert Scales, Models
Peer reviewed
Parkkinen, Veli-Pekka; Baumgartner, Michael – Sociological Methods & Research, 2023
In recent years, proponents of configurational comparative methods (CCMs) have advanced various dimensions of robustness as instrumental to model selection. But these robustness considerations have not led to computable robustness measures, and they have typically been applied to the analysis of real-life data with unknown underlying causal…
Descriptors: Robustness (Statistics), Comparative Analysis, Causal Models, Models
Bonifay, Wes – Grantee Submission, 2022
Traditional statistical model evaluation typically relies on goodness-of-fit testing and quantifying model complexity by counting parameters. Both of these practices may result in overfitting and have thereby contributed to the generalizability crisis. The information-theoretic principle of minimum description length addresses both of these…
Descriptors: Statistical Analysis, Models, Goodness of Fit, Evaluation Methods
Peer reviewed
Jang, Yoona; Hong, Sehee – Educational and Psychological Measurement, 2023
The purpose of this study was to evaluate the degree of classification quality in the basic latent class model when covariates are either included or are not included in the model. To accomplish this task, Monte Carlo simulations were conducted in which the results of models with and without a covariate were compared. Based on these simulations,…
Descriptors: Classification, Models, Prediction, Sample Size
Peer reviewed
W. Jake Thompson – Grantee Submission, 2024
Diagnostic classification models (DCMs) are psychometric models that can be used to estimate the presence or absence of psychological traits, or proficiency on fine-grained skills. Critical to the use of any psychometric model in practice, including DCMs, is an evaluation of model fit. Traditionally, DCMs have been estimated with maximum…
Descriptors: Bayesian Statistics, Classification, Psychometrics, Goodness of Fit
Peer reviewed
Reimers, Jennifer; Turner, Ronna C.; Tendeiro, Jorge N.; Lo, Wen-Juo; Keiffer, Elizabeth – Measurement: Interdisciplinary Research and Perspectives, 2023
Person-fit analyses are commonly used to detect aberrant responding in self-report data. Nonparametric person-fit statistics do not require fitting a parametric test theory model and have performed well compared to other person-fit statistics. However, detection of aberrant responding has primarily focused on dominance response data, thus the…
Descriptors: Goodness of Fit, Nonparametric Statistics, Error of Measurement, Comparative Analysis
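As a rough illustration of the nonparametric flavor of person-fit analysis (not necessarily one of the statistics compared in the article), the sketch below counts Guttman errors per respondent in dichotomous dominance data: pairs where an easier item is failed while a harder item is passed.

```python
import numpy as np

def guttman_errors(responses):
    """Count Guttman errors per person for a persons x items 0/1 array."""
    X = np.asarray(responses)
    order = np.argsort(-X.mean(axis=0))      # reorder columns: easiest item first
    X = X[:, order]
    n_items = X.shape[1]
    errors = np.zeros(X.shape[0], dtype=int)
    for i in range(n_items):
        for j in range(i + 1, n_items):
            # error: easier item i failed while harder item j passed
            errors += (X[:, i] == 0) & (X[:, j] == 1)
    return errors
```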
Ben Stenhaug; Ben Domingue – Grantee Submission, 2022
The fit of an item response model is typically conceptualized as whether a given model could have generated the data. We advocate for an alternative view of fit, "predictive fit", based on the model's ability to predict new data. We derive two predictive fit metrics for item response models that assess how well an estimated item response…
Descriptors: Goodness of Fit, Item Response Theory, Prediction, Models
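A minimal sketch of the general notion of predictive fit, assuming a simple Rasch model and previously estimated person and item parameters; out-of-sample log-likelihood on held-out responses stands in for the two metrics derived in the paper, which are not reproduced here.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def holdout_log_likelihood(theta_hat, b_hat, held_out):
    """Predictive fit idea: log-likelihood of held-out responses under
    parameters estimated on training data. `held_out` holds
    (person_index, item_index, response) triples with 0/1 responses."""
    ll = 0.0
    for p, i, y in held_out:
        pr = rasch_prob(theta_hat[p], b_hat[i])
        ll += y * np.log(pr) + (1 - y) * np.log(1 - pr)
    return ll
```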
Peer reviewed
Wang, Yan; Kim, Eunsook; Ferron, John M.; Dedrick, Robert F.; Tan, Tony X.; Stark, Stephen – Educational and Psychological Measurement, 2021
Factor mixture modeling (FMM) has been increasingly used to investigate unobserved population heterogeneity. This study examined the issue of covariate effects with FMM in the context of measurement invariance testing. Specifically, the impact of excluding and misspecifying covariate effects on measurement invariance testing and class enumeration…
Descriptors: Role, Error of Measurement, Monte Carlo Methods, Models
Peer reviewed
Ma, Wenchao; de la Torre, Jimmy – Educational Measurement: Issues and Practice, 2019
In this ITEMS module, we introduce the generalized deterministic inputs, noisy "and" gate (G-DINA) model, which is a general framework for specifying, estimating, and evaluating a wide variety of cognitive diagnosis models. The module contains a nontechnical introduction to diagnostic measurement, an introductory overview of the G-DINA…
Descriptors: Models, Classification, Measurement, Identification
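For orientation, the sketch below evaluates the item response function of the DINA model, one special case nested within the G-DINA framework described in the module; the guessing and slip values are illustrative only.

```python
import numpy as np

def dina_prob(alpha, q_row, guess, slip):
    """DINA item response probability: respondent with attribute pattern
    `alpha` answering an item with Q-matrix row `q_row`."""
    alpha = np.asarray(alpha)
    q_row = np.asarray(q_row)
    eta = int(np.all(alpha[q_row == 1] == 1))   # 1 if all required attributes are mastered
    return (1 - slip) ** eta * guess ** (1 - eta)

# A master of both required attributes vs. a non-master:
print(dina_prob([1, 1], [1, 1], guess=0.2, slip=0.1))  # 0.9
print(dina_prob([1, 0], [1, 1], guess=0.2, slip=0.1))  # 0.2
```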
Peer reviewed
Lozano, José H.; Revuelta, Javier – Educational and Psychological Measurement, 2023
The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing the operations involved in solving the items. The model accounts for the possibility that the ability to learn may manifest differently for correct and…
Descriptors: Bayesian Statistics, Learning Processes, Test Items, Item Analysis
Peer reviewed
Radu Bogdan Toma – Journal of Early Adolescence, 2024
The Expectancy-Value model has been extensively used to understand students' achievement motivation. However, recent studies propose the inclusion of cost as a separate construct from values, leading to the development of the Expectancy-Value-Cost model. This study aimed to adapt Kosovich et al.'s ("The Journal of Early Adolescence", 35,…
Descriptors: Student Motivation, Student Attitudes, Academic Achievement, Mathematics Achievement
Peer reviewed
PDF on ERIC
Shi, Yang; Schmucker, Robin; Chi, Min; Barnes, Tiffany; Price, Thomas – International Educational Data Mining Society, 2023
Knowledge components (KCs) have many applications. In computing education, identifying when specific KCs are demonstrated has been challenging. This paper introduces an entirely data-driven approach for: (1) discovering KCs; and (2) demonstrating KCs, using students' actual code submissions. Our system is based on two expected properties of KCs: (1)…
Descriptors: Computer Science Education, Data Analysis, Programming, Coding