Showing 1 to 15 of 241 results
Peer reviewed
Direct link
Hans-Peter Piepho; Johannes Forkman; Waqas Ahmed Malik – Research Synthesis Methods, 2024
Checking for possible inconsistency between direct and indirect evidence is an important task in network meta-analysis. Recently, an evidence-splitting (ES) model has been proposed that allows direct and indirect evidence in a network to be separated, and hence inconsistency to be assessed. A salient feature of this model is that the variance for…
Descriptors: Maximum Likelihood Statistics, Evidence, Networks, Meta Analysis
Peer reviewed
Direct link
Doran, Harold – Journal of Educational and Behavioral Statistics, 2023
This article is concerned with a subset of numerically stable and scalable algorithms useful to support computationally complex psychometric models in the era of machine learning and massive data. The subset selected here is a core set of numerical methods that should be familiar to computational psychometricians and considers whitening transforms…
Descriptors: Scaling, Algorithms, Psychometrics, Computation
Peer reviewed
Direct link
Sideridis, Georgios D.; Jaffari, Fathima – Measurement and Evaluation in Counseling and Development, 2022
The utility of the maximum likelihood F-test was demonstrated as an alternative to the omnibus chi-square test for evaluating model fit in confirmatory factor analysis with small samples, as it is well documented that the likelihood ratio test (T_ML) is not chi-square distributed in small samples.
Descriptors: Maximum Likelihood Statistics, Factor Analysis, Alternative Assessment, Sample Size
Peer reviewed
Direct link
Viechtbauer, Wolfgang; López-López, José Antonio – Research Synthesis Methods, 2022
Heterogeneity is commonplace in meta-analysis. When heterogeneity is found, researchers often aim to identify predictors that account for at least part of such heterogeneity by using mixed-effects meta-regression models. Another potentially relevant goal is to focus on the amount of heterogeneity as a function of one or more predictors, but this…
Descriptors: Meta Analysis, Models, Predictor Variables, Computation
Peer reviewed
Direct link
Mostafa Hosseinzadeh; Ki Lynn Matlock Cole – Educational and Psychological Measurement, 2024
In real-world situations, multidimensional data may appear on large-scale tests or psychological surveys. The purpose of this study was to investigate the effects of the quantity and magnitude of cross-loadings and model specification on item parameter recovery in multidimensional Item Response Theory (MIRT) models, especially when the model was…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Algorithms
Peer reviewed
Direct link
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Grantee Submission, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. (2020) estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores,…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
Peer reviewed
Direct link
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Journal of Educational Measurement, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores, including…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
Peer reviewed
PDF on ERIC
Han Du; Brian Keller; Egamaria Alacam; Craig Enders – Grantee Submission, 2023
In Bayesian statistics, the most widely used criteria of Bayesian model assessment and comparison are Deviance Information Criterion (DIC) and Watanabe-Akaike Information Criterion (WAIC). A multilevel mediation model is used as an illustrative example to compare different types of DIC and WAIC. More specifically, the study compares the…
Descriptors: Bayesian Statistics, Models, Comparative Analysis, Probability
Peer reviewed
PDF on ERIC
Kartal, Seval Kula – International Journal of Progressive Education, 2020
One of the aims of the current study is to specify the model providing the best fit to the data among the exploratory, the bifactor exploratory and the confirmatory structural equation models. The study compares the three models based on the model data fit statistics and item parameter estimations (factor loadings, cross-loadings, factor…
Descriptors: Learning Motivation, Measures (Individuals), Undergraduate Students, Foreign Countries
Peer reviewed
Direct link
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Grantee Submission, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Peer reviewed
Direct link
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Peer reviewed
Direct link
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Journal of Educational Measurement, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
Peer reviewed
Direct link
Kim, Su-Young; Huh, David; Zhou, Zhengyang; Mun, Eun-Young – International Journal of Behavioral Development, 2020
Latent growth models (LGMs) are an application of structural equation modeling and frequently used in developmental and clinical research to analyze change over time in longitudinal outcomes. Maximum likelihood (ML), the most common approach for estimating LGMs, can fail to converge or may produce biased estimates in complex LGMs especially in…
Descriptors: Bayesian Statistics, Maximum Likelihood Statistics, Longitudinal Studies, Models
Peer reviewed
Direct link
Ranger, Jochen; Kuhn, Jörg-Tobias; Wolgast, Anett – Journal of Educational Measurement, 2021
Van der Linden's hierarchical model for responses and response times can be used in order to infer the ability and mental speed of test takers from their responses and response times in an educational test. A standard approach for this is maximum likelihood estimation. In real-world applications, the data of some test takers might be partly…
Descriptors: Models, Reaction Time, Item Response Theory, Tests
Mohammed Alqabbaa – ProQuest LLC, 2021
Psychometricians at the Education and Training Evaluation Commission (ETEC) developed a new test scoring method, the latent D-scoring method (DSM-L), which is believed to be easier and more efficient to use than the Item Response Theory (IRT) method. However, there are no studies…
Descriptors: Item Response Theory, Scoring, Item Analysis, Equated Scores