Showing all 8 results
Peer reviewed
Yan Xia; Selim Havan – Educational and Psychological Measurement, 2024
Although parallel analysis has been found to be an accurate method for determining the number of factors in many conditions with complete data, its application under missing data is limited. The existing literature recommends that, after using an appropriate multiple imputation method, researchers either apply parallel analysis to every imputed…
Descriptors: Data Interpretation, Factor Analysis, Statistical Inference, Research Problems
Peer reviewed
Thomas, D. Roland; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is such doubt in research practice about the reliability of difference scores that granting agencies, journal editors, reviewers, and committees of graduate students' theses have been known to deplore their use. This most maligned index can be used in studies of change, growth, or perhaps discrepancy between two measures taken on the same…
Descriptors: Statistical Analysis, Reliability, Scores, Change
Peer reviewed
Shear, Benjamin R.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2013
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Descriptors: Error of Measurement, Multiple Regression Analysis, Data Analysis, Computer Simulation
Peer reviewed
Howell, Ryan T.; Shields, Alan L. – Educational and Psychological Measurement, 2008
Meta-analytic reliability generalizations (RGs) are limited by the scarcity of reliability reporting in primary articles, and currently, RG investigators lack a method to quantify the impact of such nonreporting. This article introduces a stepwise procedure to address this challenge. First, the authors introduce a formula that allows researchers…
Descriptors: Reliability, Meta Analysis, Generalization, Evaluation Methods
Peer reviewed
Zimmerman, Donald W. – Educational and Psychological Measurement, 2007
Properties of the Spearman correction for attenuation were investigated using Monte Carlo methods, under conditions where correlations between error scores exist as a population parameter and also where correlated errors arise by chance in random sampling. Equations allowing for all possible dependence among true and error scores on two tests at…
Descriptors: Monte Carlo Methods, Correlation, Sampling, Data Analysis
Peer reviewed
Conger, Anthony J.; Ward, David G. – Educational and Psychological Measurement, 1984
Sixteen measures of reliability for two-category nominal scales are compared. Upon correcting for chance agreement, there are only five distinct indices: Fleiss's modification of A₁, the phi coefficient, Cohen's kappa, and two intraclass coefficients. Recommendations for choosing an agreement index are made based on definitions, magnitude,…
Descriptors: Comparative Analysis, Correlation, Data Analysis, Mathematical Models
Peer reviewed
McQuitty, Louis L.; Koch, Valerie L. – Educational and Psychological Measurement, 1976
A relatively reliable and valid hierarchy of clusters of objects is plotted from the highest column entries, exclusively, of a matrix of interassociations between the objects. Having developed out of a loose definition of types, the method isolates both loose and highly definitive types, and all those in between. (Author/RC)
Descriptors: Cluster Analysis, Cluster Grouping, Comparative Analysis, Data Analysis
Peer reviewed
Mintz, Jim; Weidemann, Carl – Educational and Psychological Measurement, 1972
The procedure and program described here are designed to assess the reliability of J judges who are assigning N stimuli to one of K categories. (Authors)
Descriptors: Analysis of Variance, Classification, Computer Programs, Correlation