Showing all 6 results
Peer reviewed
Jiang, Zhehan; Raymond, Mark; DiStefano, Christine; Shi, Dexin; Liu, Ren; Sun, Junhua – Educational and Psychological Measurement, 2022
Computing confidence intervals around generalizability coefficients has long been a challenging task in generalizability theory. This is a serious practical problem because generalizability coefficients are often computed from designs where some facets have small sample sizes, and researchers have little guidance regarding the trustworthiness of the…
Descriptors: Monte Carlo Methods, Intervals, Generalizability Theory, Error of Measurement
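For readers unfamiliar with the quantity at issue, the sketch below is an illustration, not the authors' procedure: it estimates a generalizability coefficient for a crossed persons-by-items design from ANOVA mean squares and attaches a percentile-bootstrap confidence interval by resampling persons. The design, sample sizes, and bootstrap settings are all assumptions made for the example.

```python
# A minimal sketch, not the authors' procedure: estimate E-rho^2 for a crossed
# persons-by-items (p x i) design from ANOVA mean squares and attach a
# percentile-bootstrap confidence interval by resampling persons.
import numpy as np

rng = np.random.default_rng(0)

def g_coefficient(scores):
    """E-rho^2 = sigma2_p / (sigma2_p + sigma2_pi,e / n_i) for a p x i design."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)
    ms_p = n_i * np.sum((person_means - grand) ** 2) / (n_p - 1)
    resid = scores - person_means[:, None] - item_means[None, :] + grand
    ms_pi = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))
    var_p = max((ms_p - ms_pi) / n_i, 0.0)   # universe-score variance
    return var_p / (var_p + ms_pi / n_i)     # ms_pi estimates sigma2_pi,e

# Fake data: 30 persons x 5 items (a deliberately small item facet).
scores = rng.normal(0.0, 1.0, (30, 1)) + rng.normal(0.0, 1.0, (30, 5))

# Percentile bootstrap over persons.
boot = [g_coefficient(scores[rng.integers(0, 30, 30)]) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"E-rho^2 = {g_coefficient(scores):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```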
Peer reviewed
Almehrizi, Rashid S. – Journal of Educational Measurement, 2021
Estimates of variance components (universe score variance and measurement error variances) and of generalizability coefficients are, like all statistics, subject to sampling variability, particularly in small samples. Such variability is traditionally quantified through estimated standard errors and/or confidence intervals. The paper derives new…
Descriptors: Error of Measurement, Statistics, Design, Generalizability Theory
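As a point of reference for the "traditional" standard errors mentioned in the abstract, one common normal-theory approximation for a p x i design treats each mean square as proportional to a chi-square variable, so Var(MS) is roughly 2 MS^2 / df, and propagates this through the linear combination that defines the universe-score variance. The formulas below are a generic textbook illustration, not the paper's new derivation.

```latex
\[
\hat{\sigma}^2_p = \frac{MS_p - MS_{pi}}{n_i},
\qquad
\widehat{\operatorname{Var}}\!\left(\hat{\sigma}^2_p\right)
  \approx \frac{2}{n_i^{2}}
  \left(\frac{MS_p^{2}}{df_p} + \frac{MS_{pi}^{2}}{df_{pi}}\right),
\qquad
\widehat{SE}\!\left(\hat{\sigma}^2_p\right)
  = \sqrt{\widehat{\operatorname{Var}}\!\left(\hat{\sigma}^2_p\right)}.
\]
```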
Peer reviewed
Full text (PDF) available on ERIC
Simsek, Ahmet Salih – International Journal of Assessment Tools in Education, 2023
The Likert-type item is the most popular response format for collecting data in social, educational, and psychological studies through scales or questionnaires. However, there is no consensus on whether parametric or non-parametric tests should be preferred when analyzing Likert-type data. This study examined the statistical power of parametric and…
Descriptors: Error of Measurement, Likert Scales, Nonparametric Statistics, Statistical Analysis
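To make the comparison concrete, the small simulation below estimates empirical power for an independent-samples t-test and a Mann-Whitney U test on 5-point Likert-type responses generated by discretizing shifted normal variables. It is an illustration under assumed settings, not the study's actual design; the group sizes, effect size, and thresholds are arbitrary.

```python
# Empirical power of a t-test vs. the Mann-Whitney U test on 5-point
# Likert-type data (illustrative settings only).
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(1)
cuts = np.array([-1.5, -0.5, 0.5, 1.5])   # thresholds -> categories 1..5

def likert_sample(n, shift):
    return np.digitize(rng.normal(shift, 1, n), cuts) + 1

n_rep, n, shift, alpha = 2000, 30, 0.5, 0.05
reject_t = reject_u = 0
for _ in range(n_rep):
    a, b = likert_sample(n, 0.0), likert_sample(n, shift)
    reject_t += ttest_ind(a, b).pvalue < alpha
    reject_u += mannwhitneyu(a, b).pvalue < alpha

print(f"t-test power ~ {reject_t / n_rep:.2f}, "
      f"Mann-Whitney power ~ {reject_u / n_rep:.2f}")
```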
Peer reviewed
Lin, Chih-Kai – Language Testing, 2017
Sparse-rated data are common in operational performance-based language tests, as an inevitable result of assigning examinee responses to a fraction of available raters. The current study investigates the precision of two generalizability-theory methods (i.e., the rating method and the subdividing method) specifically designed to accommodate the…
Descriptors: Data Analysis, Language Tests, Generalizability Theory, Accuracy
Chiu, Christopher W. T. – 2000
A procedure was developed to analyze data with missing observations by extracting smaller, analyzable subsets of data from a sparsely filled data matrix. This subdividing method, based on the conceptual framework of meta-analysis, was accomplished by creating data sets that exhibit structural designs and then pooling variance components…
Descriptors: Difficulty Level, Error of Measurement, Generalizability Theory, Interrater Reliability
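The gist of such a subdividing approach can be sketched as follows. This is a hedged illustration of the general idea, not Chiu's exact algorithm: estimate variance components within fully crossed sub-blocks of a sparse persons-by-raters matrix, then pool them with meta-analysis-style weights. The block layout and weighting rule are assumptions made for the example.

```python
# Illustrative pooling of variance components across crossed sub-blocks
# carved from a sparse persons-by-raters design (not Chiu's exact method).
import numpy as np

def variance_components(block):
    """ANOVA estimates for a crossed p x r block: (sigma2_p, sigma2_r, sigma2_pr,e)."""
    n_p, n_r = block.shape
    grand = block.mean()
    pm, rm = block.mean(axis=1), block.mean(axis=0)
    ms_p = n_r * np.sum((pm - grand) ** 2) / (n_p - 1)
    ms_r = n_p * np.sum((rm - grand) ** 2) / (n_r - 1)
    resid = block - pm[:, None] - rm[None, :] + grand
    ms_pr = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1))
    return (max((ms_p - ms_pr) / n_r, 0.0),
            max((ms_r - ms_pr) / n_p, 0.0),
            ms_pr)

def pooled_components(blocks):
    """Weight each block's estimates by its person degrees of freedom."""
    weights = np.array([b.shape[0] - 1 for b in blocks], dtype=float)
    comps = np.array([variance_components(b) for b in blocks])
    return (comps * weights[:, None]).sum(axis=0) / weights.sum()

rng = np.random.default_rng(2)
# Two complete sub-blocks: different persons, different rater pairs.
blocks = [rng.normal(0, 1, (20, 2)) + rng.normal(0, 1, (20, 1)),
          rng.normal(0, 1, (15, 2)) + rng.normal(0, 1, (15, 1))]
print(pooled_components(blocks))   # pooled (sigma2_p, sigma2_r, sigma2_pr,e)
```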
Peer reviewed
Zinbarg, Richard E.; Yovel, Iftah; Revelle, William; McDonald, Roderick P. – Applied Psychological Measurement, 2006
The extent to which a scale score generalizes to a latent variable common to all of the scale's indicators is indexed by the scale's general factor saturation. Seven techniques for estimating this parameter, omega hierarchical (ω_h), are compared in a series of simulated data sets. Primary comparisons were based on 160 artificial…
Descriptors: Computation, Factor Analysis, Reliability, Correlation
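For context, ω_h (the general factor saturation being estimated here) is commonly defined as the squared sum of general-factor loadings divided by the total score variance. The snippet below computes that quantity for a made-up, orthogonal bifactor-style loading matrix; it illustrates the definition and is not a reproduction of the article's simulations.

```python
# Omega-hierarchical for an assumed orthogonal bifactor-style loading matrix.
import numpy as np

# Made-up loadings; column 0 is the general factor, columns 1-2 are group factors.
loadings = np.array([
    [0.6, 0.4, 0.0],
    [0.6, 0.4, 0.0],
    [0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5],
])
uniqueness = 1.0 - (loadings ** 2).sum(axis=1)

# Model-implied item covariance matrix and total score variance 1' Sigma 1.
sigma = loadings @ loadings.T + np.diag(uniqueness)
total_var = sigma.sum()

omega_h = loadings[:, 0].sum() ** 2 / total_var   # general factor saturation
print(f"omega_h = {omega_h:.3f}")
```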