Showing 1 to 15 of 25 results
Peer reviewed
Direct link
Yasuhiro Yamamoto; Yasuo Miyazaki – Journal of Experimental Education, 2025
Bayesian methods are often said to solve the small-sample problems of frequentist methods by encoding prior knowledge in the prior distribution. However, there are dangers both in weighting prior knowledge too strongly and in situations where little prior knowledge is available. To address this issue, in this article we consider applying two Bayesian…
Descriptors: Sample Size, Hierarchical Linear Modeling, Bayesian Statistics, Prior Learning
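The tension this abstract describes — priors help in small samples but can also dominate them — can be illustrated with a conjugate normal-mean model (a generic sketch with assumed toy numbers, not the estimators studied in the article):

```python
# Conjugate normal-mean model with known variance: the posterior mean is a
# precision-weighted average of the prior mean and the sample mean.
def posterior_mean(prior_mean, prior_var, sample_mean, sample_var, n):
    """Blend prior and data; the weight on the data grows with n."""
    data_precision = n / sample_var
    prior_precision = 1.0 / prior_var
    w = data_precision / (data_precision + prior_precision)
    return w * sample_mean + (1.0 - w) * prior_mean

# With n = 5 the estimate is pulled strongly toward the prior mean, so a
# badly chosen prior dominates; with n = 500 the data dominate instead.
small_n = posterior_mean(prior_mean=0.0, prior_var=1.0,
                         sample_mean=2.0, sample_var=4.0, n=5)
large_n = posterior_mean(prior_mean=0.0, prior_var=1.0,
                         sample_mean=2.0, sample_var=4.0, n=500)
```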
Peer reviewed
Direct link
Tenko Raykov; Christine DiStefano; Lisa Calvocoressi – Educational and Psychological Measurement, 2024
This note demonstrates that the widely used Bayesian Information Criterion (BIC) need not be generally viewed as a routinely dependable index for model selection when the bifactor and second-order factor models are examined as rival means for data description and explanation. To this end, we use an empirically relevant setting with…
Descriptors: Bayesian Statistics, Models, Decision Making, Comparative Analysis
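The BIC at issue is k·ln(n) − 2·ln(L̂), with smaller values preferred. A minimal sketch of such a comparison (the fit values below are hypothetical, not the article's data, and the article's point is precisely that this rule is not routinely dependable for bifactor versus second-order models):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: penalized fit, lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits for two rival models of the same data (assumed numbers):
bic_bifactor = bic(log_likelihood=-1240.5, n_params=30, n_obs=500)
bic_second_order = bic(log_likelihood=-1252.0, n_params=24, n_obs=500)
chosen = "bifactor" if bic_bifactor < bic_second_order else "second-order"
```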
Peer reviewed
Direct link
Na Shan; Ping-Feng Xu – Journal of Educational and Behavioral Statistics, 2025
The detection of differential item functioning (DIF) is important in psychological and behavioral sciences. Standard DIF detection methods perform an item-by-item test iteratively, often assuming that all items except the one under investigation are DIF-free. This article proposes a Bayesian adaptive Lasso method to detect DIF in graded response…
Descriptors: Bayesian Statistics, Item Response Theory, Adolescents, Longitudinal Studies
Peer reviewed
Direct link
Sedat Sen; Allan S. Cohen – Educational and Psychological Measurement, 2024
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's…
Descriptors: Goodness of Fit, Item Response Theory, Sample Size, Classification
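Several of the penalized-likelihood indices named above follow the same pattern: −2·ln(L̂) plus a complexity penalty. A minimal sketch of class enumeration with four of them (all log-likelihoods and parameter counts below are assumed, purely illustrative numbers):

```python
import math

def fit_indices(log_lik, k, n):
    """AIC, AICc, BIC, and CAIC for a model with k parameters, n observations."""
    aic = 2.0 * k - 2.0 * log_lik
    aicc = aic + (2.0 * k * (k + 1.0)) / (n - k - 1.0)  # small-sample correction
    bic = k * math.log(n) - 2.0 * log_lik
    caic = k * (math.log(n) + 1.0) - 2.0 * log_lik      # consistent AIC
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic}

# Hypothetical log-likelihoods for 1-, 2-, and 3-class mixture IRT fits,
# assuming 10 parameters per class and n = 1000 examinees:
fits = {c: fit_indices(ll, k=10 * c, n=1000)
        for c, ll in {1: -5400.0, 2: -5310.0, 3: -5302.0}.items()}
# Each index "detects" the class count at which it is minimized.
best_by_bic = min(fits, key=lambda c: fits[c]["BIC"])
```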
Peer reviewed
Direct link
Fangxing Bai; Ben Kelcey – Society for Research on Educational Effectiveness, 2024
Purpose and Background: Despite the flexibility of multilevel structural equation modeling (MLSEM), a practical limitation many researchers encounter is how to effectively estimate model parameters with typical sample sizes when there are many levels of (potentially disparate) nesting. We develop a method-of-moment corrected maximum likelihood…
Descriptors: Maximum Likelihood Statistics, Structural Equation Models, Sample Size, Faculty Development
Peer reviewed
Direct link
Ben Kelcey; Fangxing Bai; Amota Ataneka; Yanli Xie; Kyle Cox – Society for Research on Educational Effectiveness, 2024
We develop a structural after measurement (SAM) method for structural equation models (SEMs) that accommodates missing data. The results show that the proposed SAM missing data estimator outperforms conventional full information (FI) estimators in terms of convergence, bias, and root-mean-square-error in small-to-moderate samples or large samples…
Descriptors: Structural Equation Models, Research Problems, Error of Measurement, Maximum Likelihood Statistics
Peer reviewed
PDF on ERIC (full text)
Önen, Emine – Universal Journal of Educational Research, 2019
This simulation study was conducted to compare the power of frequentist and Bayesian approaches to detect model misspecification in the form of omitted cross-loadings in CFA models, with respect to several variables (number of omitted cross-loadings, magnitude of main loadings, number of factors, number of indicators…
Descriptors: Factor Analysis, Bayesian Statistics, Comparative Analysis, Statistical Analysis
Peer reviewed
PDF on ERIC (full text)
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics
Peer reviewed
Direct link
Huang, Jiajing; Liang, Xinya; Yang, Yanyun – AERA Online Paper Repository, 2017
In Bayesian structural equation modeling (BSEM), prior settings may affect model fit, parameter estimation, and model comparison. This simulation study investigated how priors impact the evaluation of relative fit across competing models. The design factors for data generation included sample sizes, factor structures, data distributions, and…
Descriptors: Bayesian Statistics, Structural Equation Models, Goodness of Fit, Sample Size
Peer reviewed
Direct link
Lee, Soo; Suh, Youngsuk – Journal of Educational Measurement, 2018
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Descriptors: Item Response Theory, Sample Size, Models, Error of Measurement
Peer reviewed
Direct link
Sen, Sedat – International Journal of Testing, 2018
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Descriptors: Item Response Theory, Comparative Analysis, Computation, Maximum Likelihood Statistics
Peer reviewed
Direct link
McNeish, Daniel M. – Journal of Educational and Behavioral Statistics, 2016
Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…
Descriptors: Models, Statistical Analysis, Hierarchical Linear Modeling, Sample Size
Lamsal, Sunil – ProQuest LLC, 2015
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
Descriptors: Item Response Theory, Monte Carlo Methods, Maximum Likelihood Statistics, Markov Processes
Peer reviewed
Direct link
Solomon, Benjamin G.; Forsberg, Ole J. – School Psychology Quarterly, 2017
Bayesian techniques have become increasingly common in the social sciences, fueled by advances in computing speed and the development of user-friendly software. In this paper, we advance the use of Bayesian Asymmetric Regression (BAR) to monitor intervention responsiveness when using Curriculum-Based Measurement (CBM) to assess oral reading…
Descriptors: Bayesian Statistics, Regression (Statistics), Least Squares Statistics, Evaluation Methods
Peer reviewed
PDF on ERIC (full text)
Stone, Clement A.; Tang, Yun – Practical Assessment, Research & Evaluation, 2013
Propensity score applications are often used to evaluate educational program impact. However, various options are available to estimate both propensity scores and construct comparison groups. This study used a student achievement dataset with commonly available covariates to compare different propensity scoring estimation methods (logistic…
Descriptors: Comparative Analysis, Probability, Sample Size, Program Evaluation
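The logistic option in that comparison models the propensity score as P(treated | x) = σ(β₀ + βᵀx). A minimal sketch with assumed coefficients (illustrative only, not fitted to the study's achievement dataset):

```python
import math

def propensity_score(x, coefs, intercept):
    """Logistic-model probability of treatment given covariates x."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# Assumed coefficients for two covariates (e.g., prior score, SES flag):
coefs, intercept = [0.8, -0.5], -0.2
score = propensity_score([1.2, 1.0], coefs, intercept)
# Treated and untreated students with similar scores are then matched
# (or stratified) to construct the comparison group.
```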