Showing all 5 results
Peer reviewed
PDF on ERIC
Orcan, Fatih – International Journal of Assessment Tools in Education, 2023
Among the many reliability coefficients, Cronbach's alpha and McDonald's omega are commonly used for reliability estimation. Alpha is based on inter-item correlations, while omega is derived from a factor analysis solution. This study uses simulated ordinal data sets to test whether alpha and omega produce different estimates. Their performances were compared according to the…
Descriptors: Statistical Analysis, Monte Carlo Methods, Correlation, Factor Analysis
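The Orcan (2023) abstract contrasts the two coefficients: alpha is computed from inter-item (co)variances, omega from a factor-analytic solution. A minimal Python sketch of both formulas, using standard textbook definitions rather than code from the study, is:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha from an (n_persons x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def mcdonald_omega(loadings: np.ndarray, uniquenesses: np.ndarray) -> float:
    """McDonald's omega (total) from a one-factor solution:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    lam = loadings.sum()
    return lam ** 2 / (lam ** 2 + uniquenesses.sum())
```

For ordinal items of the kind simulated in the study, both coefficients are often computed from polychoric rather than Pearson correlations; the sketch uses raw scores only to keep the contrast between the two formulas visible.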
Peer reviewed
PDF on ERIC
Cortes, Sylvester T.; Pineda, Hedeliza A.; Geverola, Immar Jun R. – Advanced Education, 2021
Instruments that assess teachers' competence in action research (AR) methodology are limited, which makes it difficult to evaluate the effectiveness of a professional development program on designing AR projects: it is hard to determine how much, and what, teachers have learned in a course or training. This cross-sectional study therefore aimed to…
Descriptors: Factor Analysis, Teacher Competencies, Action Research, Questionnaires
Peer reviewed
Direct link
Harbaugh, Allen G.; Liu, Min – AERA Online Paper Repository, 2017
This research examines the effects of nonattending response pattern contamination and selected response style patterns on measures of model fit (CFI) and internal reliability (Cronbach's alpha). A simulation study examines the effects of the percentage of contamination, the number of manifest items measured, and the sample size. Initial results…
Descriptors: Factor Analysis, Response Style (Tests), Goodness of Fit, Test Reliability
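The kind of contamination manipulated in this simulation can be illustrated with a small sketch: generate Likert-type responses from a one-factor model, replace a fraction of respondents with random ("nonattending") response vectors, and track Cronbach's alpha. The generating model, parameter values, and function name below are illustrative assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha_with_contamination(n=500, k=10, contamination=0.0):
    """Simulate 1-5 Likert responses from a one-factor model, replace a
    fraction of respondents with purely random answers, and return
    Cronbach's alpha for the resulting data set."""
    theta = rng.normal(size=n)                            # latent trait
    loadings = np.full(k, 0.7)                            # common loading (assumed)
    latent = np.outer(theta, loadings) + rng.normal(scale=0.5, size=(n, k))
    items = np.clip(np.round(latent * 1.5 + 3.0), 1, 5)   # map to a 1-5 scale
    n_bad = int(contamination * n)
    items[:n_bad] = rng.integers(1, 6, size=(n_bad, k))   # nonattending responders
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

for p in (0.0, 0.1, 0.2):
    print(f"contamination={p:.0%}  alpha={alpha_with_contamination(contamination=p):.3f}")
```

In this toy setup, alpha drops as the contamination fraction grows, since random responders weaken the inter-item correlations on which alpha depends.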
Peer reviewed
PDF on ERIC
Sengul Avsar, Asiye; Tavsancil, Ezel – Educational Sciences: Theory and Practice, 2017
This study analysed the psychometric properties of polytomous items according to nonparametric item response theory (NIRT) models. To this end, simulated datasets covering three test lengths (10, 20, and 30 items), three sample distributions (normal, right skewed, and left skewed), and three sample sizes (100, 250, and 500) were generated by conducting 20…
Descriptors: Test Items, Psychometrics, Nonparametric Statistics, Item Response Theory
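A sketch of how such polytomous datasets can be generated under crossed conditions (test length x latent-trait distribution x sample size) follows. The graded-response-style generating model and the parameter ranges are illustrative assumptions; the study's exact generation procedure is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_polytomous(n, k, trait="normal", n_categories=5):
    """Generate an (n x k) matrix of 0..(n_categories-1) item scores from a
    graded-response-style model, with a normal or skewed latent trait."""
    if trait == "normal":
        theta = rng.normal(size=n)
    elif trait == "right":
        theta = rng.gamma(2.0, 1.0, size=n) - 2.0        # right-skewed, roughly centred
    else:                                                # "left"
        theta = 2.0 - rng.gamma(2.0, 1.0, size=n)        # left-skewed
    a = rng.uniform(0.8, 2.0, size=k)                    # item discriminations
    b = np.sort(rng.normal(size=(k, n_categories - 1)), axis=1)  # ordered thresholds
    # P(X >= c) under a logistic link, shape (n, k, n_categories - 1)
    z = a[None, :, None] * (theta[:, None, None] - b[None, :, :])
    p_ge = 1.0 / (1.0 + np.exp(-z))
    # one uniform per person-item; the score is the number of thresholds passed
    u = rng.uniform(size=(n, k, 1))
    return (u < p_ge).sum(axis=2)

# one replication of a 250-person, 20-item condition with a right-skewed trait
data = generate_polytomous(250, 20, trait="right")
```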
Peer reviewed
Direct link
Sessoms, John; Henson, Robert A. – Measurement: Interdisciplinary Research and Perspectives, 2018
Diagnostic classification models (DCMs) classify examinees according to the skills they have mastered, given their test performance. This classification enables targeted feedback that can inform remedial instruction. Unfortunately, applications of DCMs have been criticized (e.g., for lacking validity support). Generally, these evaluations have been brief and…
Descriptors: Literature Reviews, Classification, Models, Criticism
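To make the classification idea concrete, the sketch below implements the classification step of one widely used DCM, the DINA model: given a Q-matrix, item slip and guess parameters, and a response vector, it computes the posterior probability of each attribute profile under a uniform prior. All numbers are toy values chosen for illustration, not drawn from the applications the article reviews.

```python
import itertools
import numpy as np

def dina_classify(responses, Q, slip, guess):
    """Posterior over attribute profiles under the DINA model with a uniform prior.

    responses:   (J,) 0/1 item responses
    Q:           (J, K) Q-matrix, Q[j, k] = 1 if item j requires attribute k
    slip, guess: (J,) item slip and guess probabilities
    """
    J, K = Q.shape
    profiles = np.array(list(itertools.product([0, 1], repeat=K)))
    post = np.zeros(len(profiles))
    for i, alpha in enumerate(profiles):
        # eta_j = 1 if the profile masters every attribute item j requires
        eta = np.all(alpha >= Q, axis=1).astype(float)
        p_correct = (1 - slip) * eta + guess * (1 - eta)
        post[i] = np.prod(np.where(responses == 1, p_correct, 1 - p_correct))
    post /= post.sum()
    return profiles, post

# toy example: 4 items measuring 2 attributes
Q = np.array([[1, 0], [0, 1], [1, 1], [1, 0]])
slip = np.full(4, 0.1)
guess = np.full(4, 0.2)
responses = np.array([1, 0, 0, 1])
profiles, post = dina_classify(responses, Q, slip, guess)
print(profiles[np.argmax(post)], post.max())   # most likely skill profile
```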