Showing 1 to 15 of 94 results
Peer reviewed; full text available as PDF on ERIC
Han Du; Brian Keller; Egamaria Alacam; Craig Enders – Grantee Submission, 2023
In Bayesian statistics, the most widely used criteria for model assessment and comparison are the Deviance Information Criterion (DIC) and the Watanabe-Akaike Information Criterion (WAIC). A multilevel mediation model is used as an illustrative example to compare different types of DIC and WAIC. More specifically, the study compares the…
Descriptors: Bayesian Statistics, Models, Comparative Analysis, Probability
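For readers less familiar with these criteria, the following is a minimal sketch of how WAIC and a variance-based variant of DIC can be computed from posterior draws, assuming a matrix of pointwise log-likelihood values is already available (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def waic_and_dic(log_lik):
    """Compute WAIC and a DIC variant from an (S draws x n observations)
    matrix of pointwise log-likelihood values evaluated at posterior draws."""
    S, n = log_lik.shape

    # WAIC: log pointwise predictive density minus the effective
    # number of parameters p_waic (posterior variance of the log-likelihood).
    lppd = np.sum(np.logaddexp.reduce(log_lik, axis=0) - np.log(S))
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    waic = -2.0 * (lppd - p_waic)

    # DIC (alternative form): posterior mean deviance plus p_D approximated
    # by half the posterior variance of the deviance.
    deviance = -2.0 * np.sum(log_lik, axis=1)   # deviance at each draw
    p_d = 0.5 * np.var(deviance, ddof=1)
    dic = np.mean(deviance) + p_d

    return waic, dic
```

Lower values of either criterion indicate better expected predictive fit; the classical DIC instead evaluates the deviance at the posterior mean of the parameters, which requires one additional model evaluation and is omitted in this sketch.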
Peer reviewed
Kim, Su-Young; Huh, David; Zhou, Zhengyang; Mun, Eun-Young – International Journal of Behavioral Development, 2020
Latent growth models (LGMs) are an application of structural equation modeling and are frequently used in developmental and clinical research to analyze change over time in longitudinal outcomes. Maximum likelihood (ML), the most common approach for estimating LGMs, can fail to converge or may produce biased estimates in complex LGMs, especially in…
Descriptors: Bayesian Statistics, Maximum Likelihood Statistics, Longitudinal Studies, Models
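For orientation, a basic linear LGM for repeated measures can be written in the generic textbook form below (not the specific models compared in the article):

```latex
y_{ti} = \eta_{0i} + \eta_{1i}\lambda_t + \varepsilon_{ti}, \qquad
(\eta_{0i}, \eta_{1i})' \sim N\big((\alpha_0, \alpha_1)', \Psi\big), \qquad
\varepsilon_{ti} \sim N(0, \theta_t),
```

where the λ_t are fixed time scores (e.g., 0, 1, 2, …) and η_0i, η_1i are the individual intercept and slope factors with mean vector (α_0, α_1)' and covariance matrix Ψ.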
Peer reviewed
Fangxing Bai; Ben Kelcey – Society for Research on Educational Effectiveness, 2024
Purpose and Background: Despite the flexibility of multilevel structural equation modeling (MLSEM), a practical limitation many researchers encounter is how to effectively estimate model parameters with typical sample sizes when there are many levels of (potentially disparate) nesting. We develop a method-of-moment corrected maximum likelihood…
Descriptors: Maximum Likelihood Statistics, Structural Equation Models, Sample Size, Faculty Development
Peer reviewed
Ben Kelcey; Fangxing Bai; Amota Ataneka; Yanli Xie; Kyle Cox – Society for Research on Educational Effectiveness, 2024
We develop a structural after measurement (SAM) method for structural equation models (SEMs) that accommodates missing data. The results show that the proposed SAM missing data estimator outperforms conventional full information (FI) estimators in terms of convergence, bias, and root-mean-square-error in small-to-moderate samples or large samples…
Descriptors: Structural Equation Models, Research Problems, Error of Measurement, Maximum Likelihood Statistics
Peer reviewed
Mulder, J.; Raftery, A. E. – Sociological Methods & Research, 2022
The Schwarz or Bayesian information criterion (BIC) is one of the most widely used tools for model comparison in social science research. The BIC, however, is not suitable for evaluating models with order constraints on the parameters of interest. This article explores two extensions of the BIC for evaluating order-constrained models, one where a…
Descriptors: Models, Social Science Research, Programming Languages, Bayesian Statistics
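For reference, the criterion in question has the standard form

```latex
\mathrm{BIC} = -2 \ln \hat{L} + k \ln n,
```

where L̂ is the maximized likelihood, k the number of free parameters, and n the sample size; for two models fit to the same data, exp{(BIC_0 − BIC_1)/2} approximates the Bayes factor in favor of model 1. Order constraints (e.g., β_1 > β_2 > 0) do not change the parameter count k, so the ordinary penalty cannot distinguish a constrained model from its unconstrained counterpart, which is why extensions of the kind studied above are needed.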
Peer reviewed
Ning, Ling; Luo, Wen – Journal of Experimental Education, 2018
Piecewise GMM with unknown turning points is a new procedure to investigate heterogeneous subpopulations' growth trajectories consisting of distinct developmental phases. Unlike the conventional PGMM, which relies on theory or experiment design to specify turning points a priori, the new procedure allows for an optimal location of turning points…
Descriptors: Statistical Analysis, Models, Classification, Comparative Analysis
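One common way to write a two-phase piecewise trajectory with a single turning point γ (a generic formulation, not necessarily the article's exact parameterization) is

```latex
y_{ti} = \eta_{0i} + \eta_{1i}\min(t, \gamma) + \eta_{2i}\max(0,\, t - \gamma) + \varepsilon_{ti},
```

so that η_1i is the growth rate before the turning point and η_2i the rate after it; the procedure described above treats γ as a quantity to be located optimally rather than fixed a priori.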
Peer reviewed
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
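For context, with log-likelihood ℓ(θ) the two matrices mentioned above are

```latex
\mathcal{I}_{\mathrm{obs}}(\hat{\theta})
  = -\left.\frac{\partial^{2}\ell(\theta)}{\partial\theta\,\partial\theta'}\right|_{\theta=\hat{\theta}},
\qquad
\mathcal{I}_{\mathrm{exp}}(\theta)
  = \mathrm{E}\!\left[-\frac{\partial^{2}\ell(\theta)}{\partial\theta\,\partial\theta'}\right],
```

with standard errors taken as the square roots of the diagonal elements of the inverted matrix, SE(θ̂_j) = √[I⁻¹(θ̂)]_jj. The expected version averages over the data distribution, while the observed version uses the second derivatives evaluated on the sample actually obtained, which is why the two can lead to different inferences in practice.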
Peer reviewed
Finch, Holmes; French, Brian F. – Applied Measurement in Education, 2019
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard three-parameter logistic model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact…
Descriptors: Item Response Theory, Accuracy, Test Items, Difficulty Level
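As a concrete reminder of the model referenced above, here is a minimal sketch of the three-parameter logistic item response function (illustrative code, not taken from the article):

```python
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """Probability of a correct response under the 3-parameter logistic model.

    theta : person ability
    a     : item discrimination
    b     : item difficulty
    c     : pseudo-chance (lower asymptote)
    """
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Example: an item with a=1.2, b=0.5, c=0.2 answered by a person of average ability.
print(p_correct_3pl(theta=0.0, a=1.2, b=0.5, c=0.2))
```

Difficulty b shifts the curve along the ability scale, discrimination a controls its slope, and the pseudo-chance parameter c sets the lower asymptote that a low-ability examinee can reach by guessing.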
Peer reviewed
Levy, Roy – Educational Psychologist, 2016
In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…
Descriptors: Bayesian Statistics, Models, Educational Research, Innovation
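The contrast drawn in the article rests on Bayes' theorem, under which parameters are treated as random and a prior is updated by the data:

```latex
p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{p(y)} \propto p(y \mid \theta)\, p(\theta),
```

whereas frequentist procedures treat θ as fixed and evaluate estimators over hypothetical repeated samples.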
Peer reviewed
Savalei, Victoria; Rhemtulla, Mijke – Journal of Educational and Behavioral Statistics, 2017
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately…
Descriptors: Computation, Statistical Analysis, Test Items, Maximum Likelihood Statistics
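As a concrete instance of the composites described above, a scale score or parcel is typically an unweighted sum or mean of a subset of items,

```latex
s_i = \sum_{k \in K} x_{ik}
\qquad \text{or} \qquad
\bar{x}_i = \frac{1}{|K|}\sum_{k \in K} x_{ik},
```

where K indexes the items assigned to that composite for person i.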
Peer reviewed
Jackson, Dan; Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose – Research Synthesis Methods, 2017
Network meta-analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta-analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between-study heterogeneity. Models for network meta-analysis with random…
Descriptors: Meta Analysis, Network Analysis, Comparative Analysis, Outcomes of Treatment
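The consistency assumption at issue can be stated compactly: for treatments A, B, and C, the indirect B-versus-C effect implied by the A comparisons should equal the direct one,

```latex
d_{BC}^{\mathrm{indirect}} = d_{AC} - d_{AB},
```

and models with inconsistency terms add parameters that allow this identity to fail, which is what such analyses attempt to detect and quantify.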
Peer reviewed
Zeller, Florian; Krampen, Dorothea; Reiß, Siegbert; Schweizer, Karl – Educational and Psychological Measurement, 2017
The item-position effect describes how an item's position within a test, that is, the number of previously completed items, affects the response to this item. Previously, this effect was represented by constraints reflecting simple courses, for example, a linear increase. Due to the inflexibility of these representations, our aim was to examine…
Descriptors: Goodness of Fit, Simulation, Factor Analysis, Intelligence Tests
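For illustration, a "linear increase" of the kind mentioned above is typically imposed by fixing the loadings of an item-position factor to grow proportionally with position, for example

```latex
\lambda_j = \frac{j - 1}{k - 1}, \qquad j = 1, \dots, k,
```

so the first of k items carries none of the position effect and the last carries all of it; the inflexibility of such fixed courses is what motivates the study.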
Koziol, Natalie A.; Bovaird, James A. – Educational and Psychological Measurement, 2018
Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…
Descriptors: Computation, Tests, Error of Measurement, Comparative Analysis
Peer reviewed
Lee, Soo; Suh, Youngsuk – Journal of Educational Measurement, 2018
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Descriptors: Item Response Theory, Sample Size, Models, Error of Measurement
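For reference, Lord's Wald statistic compares an item's parameter estimates between the reference (R) and focal (F) groups,

```latex
\chi^2 = (\hat{v}_R - \hat{v}_F)'\,\big(\hat{\Sigma}_R + \hat{\Sigma}_F\big)^{-1}(\hat{v}_R - \hat{v}_F),
```

where v̂_g collects the item parameters estimated in group g and Σ̂_g is the covariance matrix of those estimates; the statistic is referred to a chi-square distribution with degrees of freedom equal to the number of parameters compared.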
Peer reviewed
Su, Shu-Ching; Sedory, Stephen A.; Singh, Sarjinder – Sociological Methods & Research, 2015
In this article, we adjust the Kuk randomized response model for collecting information on a sensitive characteristic, increasing protection and efficiency by making use of forced "yes" and forced "no" responses. We first describe Kuk's model and then the proposed adjustment to Kuk's model. Next, by means of a simulation…
Descriptors: Data Collection, Models, Responses, Efficiency
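As background, in a simplified single-draw version of Kuk's design (not the adjusted model proposed in the article), a respondent who bears the sensitive attribute reports the color of a card drawn from a deck containing a proportion p_1 of red cards, and otherwise draws from a deck with proportion p_2, so that

```latex
\Pr(\text{red}) = \pi p_1 + (1 - \pi) p_2
\quad\Longrightarrow\quad
\hat{\pi} = \frac{\hat{\lambda} - p_2}{p_1 - p_2},
```

where π is the prevalence of the sensitive attribute and λ̂ the observed proportion of "red" reports; forcing some "yes"/"no" responses, as described above, further modifies this reporting distribution.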