Publication Date
In 2025 (1)
Since 2024 (9)
Descriptor
Comparative Analysis (9)
Monte Carlo Methods (9)
Accuracy (5)
Test Items (5)
Item Analysis (4)
Item Response Theory (4)
Bayesian Statistics (3)
Error of Measurement (3)
Sample Size (3)
Classification (2)
Correlation (2)
Source
Grantee Submission (3)
Applied Measurement in Education (1)
Educational and Psychological Measurement (1)
International Journal of Assessment Tools in Education (1)
Journal of Educational and Behavioral Statistics (1)
Journal of Experimental Education (1)
Structural Equation Modeling: A Multidisciplinary Journal (1)
Author
Ke-Hai Yuan (2)
Zhiyong Zhang (2)
Allan S. Cohen (1)
Audrey J. Leroux (1)
Carson Keeter (1)
Douglas Clements (1)
Eray Selçuk (1)
Ergül Demir (1)
Jiashan Tang (1)
Julie Sarama (1)
Katerina M. Marcoulides (1)
Publication Type
Journal Articles (8)
Reports - Research (8)
Information Analyses (1)
Reports - Evaluative (1)
Education Level
Early Childhood Education (1)
Elementary Education (1)
Kindergarten (1)
Primary Education (1)
Yongseok Lee; Walter L. Leite; Audrey J. Leroux – Journal of Experimental Education, 2024
In the current study, we compare propensity score (PS) matching methods for data with a cross-classified structure, where each individual is clustered within more than one group but the groups are not hierarchically organized. Through a Monte Carlo simulation study, we compare sequential cluster matching (SCM), preferential within-cluster…
Descriptors: Comparative Analysis, Data Analysis, Groups, Classification
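For readers unfamiliar with the building blocks this entry evaluates, here is a minimal Python sketch of plain propensity-score matching: scores estimated with a logistic regression, then greedy 1:1 nearest-neighbor matching on the logit of the score. The cluster-aware methods named in the abstract (SCM, preferential within-cluster matching) are not implemented, and all data and variable names are invented for illustration.

```python
# Minimal propensity-score matching sketch; the cross-classified,
# cluster-aware methods compared in the article are NOT implemented here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                                   # observed covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X @ [0.5, -0.3, 0.2])))

# Step 1: estimate propensity scores with a logistic regression.
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
logit_ps = np.log(ps / (1 - ps))

# Step 2: greedy 1:1 nearest-neighbor matching without replacement.
treated = np.flatnonzero(treat == 1)
controls = list(np.flatnonzero(treat == 0))
pairs = []
for t in treated:
    if not controls:
        break
    j = min(controls, key=lambda c: abs(logit_ps[t] - logit_ps[c]))
    pairs.append((t, j))
    controls.remove(j)

print(f"matched {len(pairs)} treated units to controls")
```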
Lingbo Tong; Wen Qu; Zhiyong Zhang – Grantee Submission, 2025
Factor analysis is widely utilized to identify latent factors underlying the observed variables. This paper presents a comprehensive comparative study of two widely used methods for determining the optimal number of factors in factor analysis, the K1 rule and parallel analysis, along with a more recently developed method, the bass-ackward method…
Descriptors: Factor Analysis, Monte Carlo Methods, Statistical Analysis, Sample Size
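Two of the three methods compared in this entry are straightforward to sketch. Below is a numpy-only illustration of the K1 (eigenvalue-greater-than-one) rule and a PCA-based parallel analysis; the bass-ackward method is not shown, and the toy data and function names are assumptions for the example.

```python
# Sketch of the K1 rule and (PCA-based) parallel analysis for choosing
# the number of factors; the bass-ackward method is not implemented.
import numpy as np

def k1_rule(data):
    """K1 (Kaiser) rule: count correlation-matrix eigenvalues > 1."""
    eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
    return int(np.sum(eig > 1.0))

def parallel_analysis(data, n_sims=200, quantile=0.95, seed=0):
    """Retain factors whose observed eigenvalues exceed the chosen
    quantile of eigenvalues from random normal data of the same size."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sim = np.empty((n_sims, p))
    for s in range(n_sims):
        rand = rng.normal(size=(n, p))
        sim[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False)))[::-1]
    threshold = np.quantile(sim, quantile, axis=0)
    return int(np.sum(obs > threshold))

# Toy data: three correlated blocks of five indicators each.
rng = np.random.default_rng(1)
factors = rng.normal(size=(300, 3))
loadings = np.kron(np.eye(3), np.full((1, 5), 0.8))          # 3 x 15 loading matrix
data = factors @ loadings + rng.normal(scale=0.6, size=(300, 15))
print("K1 rule:", k1_rule(data), "| parallel analysis:", parallel_analysis(data))
```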
Shunji Wang; Katerina M. Marcoulides; Jiashan Tang; Ke-Hai Yuan – Structural Equation Modeling: A Multidisciplinary Journal, 2024
A necessary step in applying bi-factor models is to evaluate the need for domain factors with a general factor in place. Conventional null hypothesis testing (NHT) has commonly been used for this purpose. However, conventional NHT meets challenges when the domain loadings are weak or the sample size is insufficient. This article proposes…
Descriptors: Hypothesis Testing, Error of Measurement, Comparative Analysis, Monte Carlo Methods
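The conventional NHT this entry refers to is typically a chi-square difference (likelihood-ratio) test between the bi-factor model and a nested model without the domain factors, and it is exactly this test whose chi-square reference can misbehave with weak loadings or small samples. A minimal sketch follows; the log-likelihoods and parameter counts are made-up values, and the article's proposed alternative is not implemented.

```python
# Conventional likelihood-ratio (chi-square difference) test between a
# bi-factor model and a nested one-factor model; all numbers are
# hypothetical fitted values used only for illustration.
from scipy.stats import chi2

ll_bifactor, k_bifactor = -4210.7, 45     # maximized log-likelihood, free parameters
ll_onefactor, k_onefactor = -4262.3, 30   # nested model with domain factors removed

lr_stat = 2 * (ll_bifactor - ll_onefactor)   # likelihood-ratio statistic
df = k_bifactor - k_onefactor                # difference in free parameters
p_value = chi2.sf(lr_stat, df)
print(f"LR = {lr_stat:.1f}, df = {df}, p = {p_value:.4f}")
```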
Eray Selçuk; Ergül Demir – International Journal of Assessment Tools in Education, 2024
This research aims to compare the ability and item parameter estimates of item response theory under maximum likelihood and Bayesian approaches across different Monte Carlo simulation conditions. For this purpose, depending on changes in the prior distribution type, sample size, test length, and logistic model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
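To make the ML-versus-Bayesian contrast concrete, here is a sketch of ability estimation under a 2PL model with known item parameters: a maximum-likelihood estimate versus a Bayesian EAP estimate under a standard normal prior. Item calibration itself, and the specific simulation conditions of the study, are not reproduced; all values are illustrative.

```python
# ML vs. Bayesian (EAP) ability estimation for a 2PL model with known
# item parameters; parameter values and responses are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations
b = np.array([-1.0, -0.3, 0.0, 0.6, 1.2])  # difficulties
resp = np.array([1, 1, 0, 1, 0])           # one examinee's 0/1 responses

def loglik(theta):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

# ML estimate: maximize the log-likelihood over theta.
theta_ml = minimize_scalar(lambda t: -loglik(t), bounds=(-4, 4), method="bounded").x

# Bayesian EAP estimate: posterior mean under a N(0, 1) prior, by quadrature.
grid = np.linspace(-4, 4, 161)
post = np.exp([loglik(t) for t in grid]) * np.exp(-grid**2 / 2)
theta_eap = np.sum(grid * post) / np.sum(post)

print(f"ML: {theta_ml:.3f}  EAP: {theta_eap:.3f}")
```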
Lei Guo; Wenjie Zhou; Xiao Li – Journal of Educational and Behavioral Statistics, 2024
The testlet design is very popular in educational and psychological assessments. This article proposes a new cognitive diagnosis model, the multiple-choice cognitive diagnostic testlet (MC-CDT) model, for tests using testlets consisting of MC items. The MC-CDT model uses examinees' original responses to MC items instead of dichotomously scored…
Descriptors: Multiple Choice Tests, Diagnostic Tests, Accuracy, Computer Software
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
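The characteristic curve family of linking methods that this entry extends can be illustrated with the standard, unweighted Haebara criterion: choose slope and intercept constants so that the transformed new-form item characteristic curves match the base-scale curves for the common items. The information-weighted variants referred to in the abstract are not implemented, and the item parameters below are invented.

```python
# Unweighted Haebara characteristic-curve linking for 2PL common items;
# the information-weighted methods discussed in the article are NOT shown.
import numpy as np
from scipy.optimize import minimize

a_old = np.array([1.1, 0.9, 1.4, 0.7])     # common-item parameters on the base scale
b_old = np.array([-0.5, 0.2, 0.8, -1.1])
a_new = np.array([1.0, 0.85, 1.3, 0.65])   # same items, new-form calibration
b_new = np.array([-0.2, 0.55, 1.15, -0.8])

theta = np.linspace(-4, 4, 41)              # quadrature grid

def icc(th, a, b):
    return 1 / (1 + np.exp(-a * (th[:, None] - b)))

def haebara_loss(params):
    A, B = params
    # Transform new-form item parameters onto the base scale: a/A, A*b + B.
    p_base = icc(theta, a_old, b_old)
    p_trans = icc(theta, a_new / A, A * b_new + B)
    return np.sum((p_base - p_trans) ** 2)

res = minimize(haebara_loss, x0=[1.0, 0.0], method="Nelder-Mead")
A_hat, B_hat = res.x
print(f"slope A = {A_hat:.3f}, intercept B = {B_hat:.3f}")
```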
Ke-Hai Yuan; Zhiyong Zhang – Grantee Submission, 2024
Data in social and behavioral sciences typically contain measurement errors and also do not have predefined metrics. Structural equation modeling (SEM) is commonly used to analyze such data. This article discusses issues in latent-variable modeling as compared to regression analysis with composite scores. Via logical reasoning and analytical results…
Descriptors: Error of Measurement, Measurement Techniques, Social Science Research, Behavioral Science Research
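One classical issue in this territory, offered here only as background and not necessarily the article's own argument, is attenuation: regressing an outcome on an error-contaminated composite shrinks the slope toward zero by roughly the composite's reliability. A small numeric check, with all values invented:

```python
# Classical attenuation illustration: the naive slope on a noisy composite
# is approximately (true slope) x (reliability of the composite).
import numpy as np

rng = np.random.default_rng(2)
n = 20000
xi = rng.normal(size=n)                           # true latent predictor, variance 1
y = 0.7 * xi + rng.normal(scale=0.5, size=n)
composite = xi + rng.normal(scale=0.8, size=n)    # composite with error variance 0.64

reliability = 1 / (1 + 0.8**2)                    # var(true) / var(observed)
slope = np.polyfit(composite, y, 1)[0]
print(f"naive slope {slope:.3f} ~ 0.7 x reliability = {0.7 * reliability:.3f}")
```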
Sedat Sen; Allan S. Cohen – Educational and Psychological Measurement, 2024
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's…
Descriptors: Goodness of Fit, Item Response Theory, Sample Size, Classification
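Four of the ten indices named in this entry can be computed directly from a fitted model's maximized log-likelihood, number of free parameters, and sample size. A short sketch follows; the log-likelihoods and parameter counts for the one-, two-, and three-class solutions are made up for illustration, and smaller values indicate the preferred model.

```python
# AIC, AICc, BIC, and CAIC from a model's log-likelihood (ll), number of
# free parameters (k), and sample size (n); input values are hypothetical.
import numpy as np

def fit_indices(ll, k, n):
    aic = -2 * ll + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)    # small-sample correction
    bic = -2 * ll + k * np.log(n)
    caic = -2 * ll + k * (np.log(n) + 1)
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic}

# Compare hypothetical 1-, 2-, and 3-class mixture IRT solutions.
for classes, (ll, k) in {1: (-5210.4, 20), 2: (-5150.8, 41), 3: (-5138.2, 62)}.items():
    print(classes, {name: round(v, 1) for name, v in fit_indices(ll, k, 500).items()})
```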
Pavel Chernyavskiy; Traci S. Kutaka; Carson Keeter; Julie Sarama; Douglas Clements – Grantee Submission, 2024
When researchers code behavior that is undetectable or falls outside of the validated ordinal scale, the resultant outcomes often suffer from informative missingness. Incorrect analysis of such data can lead to biased arguments around efficacy and effectiveness in the context of experimental and intervention research. Here, we detail a new…
Descriptors: Bayesian Statistics, Mathematics Instruction, Learning Trajectories, Item Response Theory