Showing 1 to 15 of 29 results
Peer reviewed
Jihong Zhang; Jonathan Templin; Xinya Liang – Journal of Educational Measurement, 2024
Bayesian diagnostic classification modeling has recently become popular in health psychology, education, and sociology. Typically, information criteria are used for model selection when researchers want to choose the best model among alternatives. In Bayesian estimation, posterior predictive checking is a flexible Bayesian model…
Descriptors: Bayesian Statistics, Cognitive Measurement, Models, Classification
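The posterior predictive checking mentioned in the abstract can be illustrated with a minimal sketch (a generic conjugate Bernoulli–Beta model, not the authors' diagnostic classification setup): draw replicated datasets from posterior parameter draws and compare a discrepancy statistic with its observed value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed binary responses (e.g., item scores) for n examinees
n = 200
y = rng.binomial(1, 0.6, size=n)

# Beta(1, 1) prior + Bernoulli likelihood -> Beta posterior for p
a_post, b_post = 1 + y.sum(), 1 + n - y.sum()

# Posterior predictive check: for each posterior draw of p,
# simulate a replicated dataset and compare a test statistic
# (here the sample proportion) with the observed value
n_draws = 2000
p_draws = rng.beta(a_post, b_post, size=n_draws)
t_obs = y.mean()
t_rep = rng.binomial(n, p_draws) / n

# Posterior predictive p-value: values near 0.5 indicate adequate fit
ppp = np.mean(t_rep >= t_obs)
```

Because the replicated statistic is generated from the same model that was fit, the PPP-value here lands near 0.5 by construction; systematic model misfit would push it toward 0 or 1.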
Peer reviewed
Johan Lyrvall; Zsuzsa Bakk; Jennifer Oser; Roberto Di Mari – Structural Equation Modeling: A Multidisciplinary Journal, 2024
We present a bias-adjusted three-step estimation approach for multilevel latent class (LC) models with covariates. The proposed approach involves (1) fitting a single-level measurement model while ignoring the multilevel structure, (2) assigning units to latent classes, and (3) fitting the multilevel model with the covariates while controlling for…
Descriptors: Hierarchical Linear Modeling, Statistical Bias, Error of Measurement, Simulation
Peer reviewed
de Jong, Valentijn M. T.; Campbell, Harlan; Maxwell, Lauren; Jaenisch, Thomas; Gustafson, Paul; Debray, Thomas P. A. – Research Synthesis Methods, 2023
A common problem in the analysis of multiple data sources, including individual participant data meta-analysis (IPD-MA), is the misclassification of binary variables. Misclassification may lead to biased estimators of model parameters, even when the misclassification is entirely random. We aimed to develop statistical methods that facilitate…
Descriptors: Classification, Meta Analysis, Bayesian Statistics, Evaluation Methods
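The bias the abstract describes can be demonstrated with a small simulation (illustrative values only, not the authors' IPD-MA methods): even entirely random, nondifferential misclassification of a binary exposure attenuates its estimated effect toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# True binary exposure and outcome with a real effect
x = rng.binomial(1, 0.5, size=n)
p_y = np.where(x == 1, 0.6, 0.4)      # true risk difference = 0.2
y = rng.binomial(1, p_y)

# Nondifferential misclassification: flip x with probability 0.2,
# independent of the outcome
flip = rng.binomial(1, 0.2, size=n).astype(bool)
x_obs = np.where(flip, 1 - x, x)

rd_true = y[x == 1].mean() - y[x == 0].mean()
rd_obs = y[x_obs == 1].mean() - y[x_obs == 0].mean()
# rd_obs is biased toward zero even though the errors are random
```

With a 50% exposure prevalence and symmetric flip probability f, the observed risk difference shrinks by a factor (1 - 2f), so here roughly 0.2 × 0.6 = 0.12.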
Peer reviewed
Kazuhiro Yamaguchi – Journal of Educational and Behavioral Statistics, 2025
This study proposes a Bayesian method for diagnostic classification models (DCMs) for a partially known Q-matrix setting between exploratory and confirmatory DCMs. This Q-matrix setting is practical and useful because test experts have pre-knowledge of the Q-matrix but cannot readily specify it completely. The proposed method employs priors for…
Descriptors: Models, Classification, Bayesian Statistics, Evaluation Methods
Peer reviewed
Daniel McNeish; Patrick D. Manapat – Structural Equation Modeling: A Multidisciplinary Journal, 2024
A recent review found that 11% of published factor models are hierarchical models with second-order factors. However, dedicated recommendations for evaluating hierarchical model fit have yet to emerge. Traditional benchmarks like RMSEA <0.06 or CFI >0.95 are often consulted, but they were never intended to generalize to hierarchical models.…
Descriptors: Factor Analysis, Goodness of Fit, Hierarchical Linear Modeling, Benchmarking
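The traditional benchmarks the abstract cites (RMSEA < 0.06, CFI > 0.95) are computed from the model and baseline chi-square statistics; a minimal sketch of the standard single-group formulas, with hypothetical values not taken from the paper:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (single group)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative fit index relative to the baseline (null) model."""
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, d_model)
    return 1.0 - d_model / d_base if d_base > 0 else 1.0

# Hypothetical fit statistics for illustration
r = rmsea(chi2=85.0, df=40, n=500)                       # ~0.047
c = cfi(chi2=85.0, df=40, chi2_base=900.0, df_base=55)   # ~0.947
```

The paper's point is that these cutoffs were calibrated on first-order models, so a hierarchical model can clear or fail them for reasons unrelated to the second-order structure.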
Jihong Zhang – ProQuest LLC, 2022
Bayesian diagnostic classification modeling has recently become popular in health psychology, education, and sociology. Typically, information criteria are used for model selection when researchers want to choose the best model among alternatives. In Bayesian estimation, posterior predictive checking is a flexible Bayesian model…
Descriptors: Bayesian Statistics, Cognitive Measurement, Models, Classification
Peer reviewed
Liang, Xinya; Cao, Chunhua – Journal of Experimental Education, 2023
To evaluate multidimensional factor structure, a popular method that combines features of confirmatory and exploratory factor analysis is Bayesian structural equation modeling with small-variance normal priors (BSEM-N). This simulation study evaluated BSEM-N as a variable selection and parameter estimation tool in factor analysis with sparse…
Descriptors: Factor Analysis, Bayesian Statistics, Structural Equation Models, Simulation
Peer reviewed
Koyuncu, Ilhan; Kilic, Abdullah Faruk – International Journal of Assessment Tools in Education, 2021
In exploratory factor analysis, researchers decide which items belong to which factors by considering statistical results, but these decisions can be subjective when items have similar factor loadings or complex factor structures. The aim of this study was to examine the validity of classifying items into…
Descriptors: Classification, Graphs, Factor Analysis, Decision Making
Peer reviewed
Lamprianou, Iasonas – Educational and Psychological Measurement, 2018
It is common practice for assessment programs to organize qualifying sessions during which the raters (often known as "markers" or "judges") demonstrate their consistency before operational rating commences. Because of the high-stakes nature of many rating activities, the research community tends to continuously explore new…
Descriptors: Social Networks, Network Analysis, Comparative Analysis, Innovation
Peer reviewed
Porter, Kristin E.; Reardon, Sean F.; Unlu, Fatih; Bloom, Howard S.; Cimpian, Joseph R. – Journal of Research on Educational Effectiveness, 2017
A valuable extension of the single-rating regression discontinuity design (RDD) is a multiple-rating RDD (MRRDD). To date, four main methods have been used to estimate average treatment effects at the multiple treatment frontiers of an MRRDD: the "surface" method, the "frontier" method, the "binding-score" method, and…
Descriptors: Regression (Statistics), Intervention, Quasiexperimental Design, Simulation
Peer reviewed
Kranzler, John H.; Floyd, Randy G.; Benson, Nicholas; Zaboski, Brian; Thibodaux, Lia – International Journal of School & Educational Psychology, 2016
In this rejoinder, the authors describe the aim of the original study as an effort to conduct a critical test of an important postulate underlying the Cross-Battery Assessment PSW approach (XBA PSW; Kranzler, Floyd, Benson, Zaboski, & Thibodaux, this issue). The authors used classification agreement analysis to examine the concordance between…
Descriptors: Identification, Learning Disabilities, Criticism, Evidence Based Practice
Peer reviewed
Stamey, James D.; Beavers, Daniel P.; Sherr, Michael E. – Sociological Methods & Research, 2017
Survey data are often subject to various types of errors such as misclassification. In this article, we consider a model where interest is simultaneously in two correlated response variables and one is potentially subject to misclassification. A motivating example of a recent study of the impact of a sexual education course for adolescents is…
Descriptors: Bayesian Statistics, Classification, Models, Correlation
Peer reviewed
Lathrop, Quinn N.; Cheng, Ying – Journal of Educational Measurement, 2014
When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…
Descriptors: Cutting Scores, Classification, Computation, Nonparametric Statistics
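A nonparametric flavor of classification consistency can be sketched with a split-half agreement rate (an illustrative stand-in under simulated data, not the authors' estimator): classify each examinee on two random half-tests and see how often the decisions agree.

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 1000, 40
cut = 0.5  # cut score expressed as a proportion correct

# Simulate binary item responses with person-varying success rates
ability = rng.beta(4, 3, size=n_persons)
y = rng.binomial(1, ability[:, None] * np.ones(n_items))

# Split the test into two random halves and classify on each half;
# the agreement rate is a simple consistency estimate
perm = rng.permutation(n_items)
h1, h2 = perm[: n_items // 2], perm[n_items // 2:]
pass1 = y[:, h1].mean(axis=1) >= cut
pass2 = y[:, h2].mean(axis=1) >= cut
consistency = np.mean(pass1 == pass2)
```

Disagreements concentrate among examinees whose true proportion correct sits near the cut score, which is exactly where any consistency estimator has the least information.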
González-Brenes, José P.; Huang, Yun – International Educational Data Mining Society, 2015
Classification evaluation metrics are often used to evaluate adaptive tutoring systems: programs that teach and adapt to humans. Unfortunately, it is not clear how intuitive these metrics are for practitioners with little machine learning background. Moreover, our experiments suggest that existing conventions for evaluating tutoring systems may…
Descriptors: Intelligent Tutoring Systems, Evaluation Methods, Program Evaluation, Student Behavior
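The abstract's concern that common metrics can mislead is easy to reproduce: under class imbalance, a degenerate majority-class predictor scores high accuracy while detecting nothing (illustrative simulation, not the authors' tutoring data).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
y_true = rng.binomial(1, 0.1, size=n)   # ~10% positive class

# A degenerate "model" that always predicts the majority class
y_pred = np.zeros(n, dtype=int)

accuracy = np.mean(y_pred == y_true)    # looks high (~0.9)
tp = np.sum((y_pred == 1) & (y_true == 1))
recall = tp / max(y_true.sum(), 1)      # 0: never finds a positive
```

Reporting accuracy alone would make this useless classifier look strong, which is why imbalance-aware metrics (recall, AUC, and similar) matter when evaluating tutoring-system models.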
Peer reviewed
Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich – Psychological Methods, 2011
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…
Descriptors: Simulation, Educational Psychology, Social Sciences, Measurement
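The sampling error that enters when L2 variables are built by aggregating few L1 units can be quantified with the standard reliability of a group mean (Spearman–Brown applied to ICC(1)); a minimal sketch with hypothetical values, not figures from the article:

```python
def group_mean_reliability(icc1, k):
    """Reliability of an observed group mean aggregated from k L1 units
    (Spearman-Brown formula applied to ICC(1))."""
    return k * icc1 / (1 + (k - 1) * icc1)

# Hypothetical classroom example: ICC(1) = 0.10, 25 students per group
r = group_mean_reliability(icc1=0.10, k=25)   # ~0.735
```

Even with 25 units per group, roughly a quarter of the variance in the observed group mean is sampling noise, which is the measurement-error problem that motivates latent (multilevel) aggregation.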