Showing all 9 results
Peer reviewed
Hayes, Timothy; Usami, Satoshi – Educational and Psychological Measurement, 2020
Recently, quantitative researchers have shown increased interest in two-step factor score regression (FSR) approaches to structural model estimation. A particularly promising approach proposed by Croon involves first extracting factor scores for each latent factor in a larger model, then correcting the variance-covariance matrix of the factor…
Descriptors: Regression (Statistics), Structural Equation Models, Statistical Bias, Correlation
Peer reviewed
Ippel, Lianne; Magis, David – Educational and Psychological Measurement, 2020
In dichotomous item response theory (IRT) framework, the asymptotic standard error (ASE) is the most common statistic to evaluate the precision of various ability estimators. Easy-to-use ASE formulas are readily available; however, the accuracy of some of these formulas was recently questioned and new ASE formulas were derived from a general…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Standards
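The abstract's "easy-to-use ASE formulas" can be illustrated with the most familiar one: the standard error of the maximum-likelihood ability estimate as one over the square root of test information. A minimal sketch under the Rasch model (the newer corrected formulas the article derives are not reproduced here; difficulties and function names are illustrative):

```python
import math

def rasch_prob(theta, b):
    """P(correct) under the Rasch model for ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def asymptotic_se(theta, difficulties):
    """Classic ASE of the ML ability estimate: 1 / sqrt(test information).
    Rasch item information is P * (1 - P)."""
    info = sum(p * (1.0 - p)
               for p in (rasch_prob(theta, b) for b in difficulties))
    return 1.0 / math.sqrt(info)

# A 5-item test with difficulties spread around 0
print(round(asymptotic_se(0.0, [-1.0, -0.5, 0.0, 0.5, 1.0]), 3))  # 0.948
```

A single item at the examinee's ability level carries information 0.25, giving an ASE of exactly 2.0; information (and thus precision) accumulates additively across items.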
Peer reviewed
Conger, Anthony J. – Educational and Psychological Measurement, 2017
Drawing parallels to classical test theory, this article clarifies the difference between rater accuracy and reliability and demonstrates how category marginal frequencies affect rater agreement and Cohen's kappa. Category assignment paradigms are developed: comparing raters to a standard (index) versus comparing two raters to one another…
Descriptors: Interrater Reliability, Evaluators, Accuracy, Statistical Analysis
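Conger's point about marginal frequencies can be made concrete: hold percent agreement fixed and skew the category marginals, and kappa drops. A small sketch (the function and the constructed rating vectors are mine, not from the article):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical ratings of the same objects."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    m1, m2 = Counter(r1), Counter(r2)
    # Chance agreement from the two raters' marginal category frequencies
    p_exp = sum((m1[c] / n) * (m2[c] / n) for c in set(m1) | set(m2))
    return (p_obs - p_exp) / (1.0 - p_exp)

# Both rater pairs agree on 90 of 100 judgments (same percent agreement)...
balanced_1 = [1] * 50 + [0] * 50
balanced_2 = [1] * 45 + [0] * 5 + [1] * 5 + [0] * 45
skewed_1 = [1] * 90 + [0] * 10
skewed_2 = [1] * 85 + [0] * 5 + [1] * 5 + [0] * 5

# ...but kappa differs sharply once the marginals are skewed
print(round(cohens_kappa(balanced_1, balanced_2), 3))  # 0.8  (50/50 marginals)
print(round(cohens_kappa(skewed_1, skewed_2), 3))      # 0.444 (90/10 marginals)
```

With skewed marginals, chance agreement rises (here from 0.50 to 0.82), so identical observed agreement yields a much lower kappa.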
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2016
The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be directly obtained from classical test theory-based models by accounting for the discrete…
Descriptors: Test Theory, Item Response Theory, Models, Correlation
Peer reviewed
Zimmerman, Donald W. – Educational and Psychological Measurement, 1983
A definition of test validity as the ratio of a covariance term to a variance term, analogous to the classical definition of test reliability, is proposed. When error scores on distinct tests are uncorrelated, the proposed definition coincides with the usual one, but it remains meaningful when error scores are correlated. (Author/BW)
Descriptors: Definitions, Mathematical Formulas, Mathematical Models, Test Theory
Peer reviewed
Raju, Nambury S. – Educational and Psychological Measurement, 1982
A necessary and sufficient condition for a perfectly homogeneous test in the sense of Loevinger is stated and proved. Using this result, a formula for computing the maximum possible KR-20 when the test variance is assumed fixed is presented. A new index of test homogeneity is also presented and discussed. (Author/BW)
Descriptors: Mathematical Formulas, Mathematical Models, Multiple Choice Tests, Test Reliability
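For readers who want the baseline quantity Raju is bounding, here is a from-scratch sketch of the standard KR-20 coefficient (Raju's maximum-KR-20 formula itself is specific to his paper and not reproduced here). The toy data form a perfect Guttman pattern, i.e. a perfectly homogeneous test in Loevinger's sense:

```python
def kr20(scores):
    """Standard KR-20 from a 0/1 item-score matrix (rows = examinees,
    columns = items), using population variances throughout."""
    n, k = len(scores), len(scores[0])
    totals = [sum(row) for row in scores]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    sum_pq = sum(
        (p := sum(row[j] for row in scores) / n) * (1.0 - p) for j in range(k)
    )
    return (k / (k - 1)) * (1.0 - sum_pq / var_t)

# A perfect Guttman pattern on 3 items: everyone who passes a harder
# item also passes all easier ones.
guttman = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(kr20(guttman))  # 0.75
```

Note that even a perfectly homogeneous response pattern does not drive KR-20 to 1; this gap between attainable and nominal maxima is the kind of issue Raju's bound addresses.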
Peer reviewed
Vegelius, Jan – Educational and Psychological Measurement, 1981
The G index is a measure of the similarity between individuals over dichotomous items. Some significance tests for the G index are described, with an example included for each case. (Author/GK)
Descriptors: Hypothesis Testing, Mathematical Formulas, Mathematical Models, Nonparametric Statistics
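The G index itself is simple to state: the proportion of agreements minus the proportion of disagreements across the dichotomous items, equivalently 2P - 1 where P is the proportion of agreements. A minimal sketch (the tests Vegelius describes are not implemented here):

```python
def g_index(x, y):
    """G index: (agreements - disagreements) / number of items, i.e. 2P - 1,
    for two individuals scored on the same dichotomous items."""
    agree = sum(a == b for a, b in zip(x, y))
    return (2.0 * agree - len(x)) / len(x)

# Two individuals agreeing on 4 of 5 items
print(g_index([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))  # 0.6
```

G ranges from -1 (disagreement on every item) through 0 (agreement at the chance level for balanced responding) to +1 (identical response patterns).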
Peer reviewed
Feldt, Leonard S. – Educational and Psychological Measurement, 1984
The binomial error model includes form-to-form difficulty differences as error variance and leads to Kuder-Richardson formula 21 as an estimate of reliability. If the form-to-form component is removed from the estimate of error variance, the binomial model leads to KR 20 as the reliability estimate. (Author/BW)
Descriptors: Achievement Tests, Difficulty Level, Error of Measurement, Mathematical Formulas
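Feldt's contrast can be seen in the formulas themselves: KR-21 uses only test length and the mean and variance of total scores, so item-difficulty spread stays in the error term, while KR-20 removes it via the per-item p(1-p) sum. A self-contained sketch with illustrative data (both helper functions are mine):

```python
def kr20(scores):
    """KR-20: item difficulty differences are removed from error variance."""
    n, k = len(scores), len(scores[0])
    totals = [sum(row) for row in scores]
    m = sum(totals) / n
    var = sum((t - m) ** 2 for t in totals) / n
    pq = sum((p := sum(r[j] for r in scores) / n) * (1 - p) for j in range(k))
    return (k / (k - 1)) * (1.0 - pq / var)

def kr21(scores):
    """KR-21: needs only k and the mean and variance of total scores;
    difficulty differences remain counted as error, so kr21 <= kr20."""
    n, k = len(scores), len(scores[0])
    totals = [sum(row) for row in scores]
    m = sum(totals) / n
    var = sum((t - m) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1.0 - m * (k - m) / (k * var))

data = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(round(kr21(data), 3), round(kr20(data), 3))  # 0.6 0.75
```

On these data the items differ in difficulty (p = 0.75, 0.50, 0.25), so KR-21 (0.6) falls below KR-20 (0.75), consistent with the abstract's account of which error components each estimate absorbs.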
Peer reviewed
Charter, Richard A. – Educational and Psychological Measurement, 1982
Practical formulas for several analysis of variance (ANOVA) designs and models are presented which make it possible for readers to compute strength of association measures without the use of complete ANOVA tables. (Author/PN)
Descriptors: Analysis of Variance, Hypothesis Testing, Mathematical Formulas, Mathematical Models
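One widely used shortcut of this kind (whether it matches Charter's exact formulas is an assumption on my part) recovers eta squared for a one-way ANOVA from nothing more than the reported F ratio and its degrees of freedom:

```python
def eta_squared(f, df_between, df_within):
    """Strength of association for a one-way ANOVA, recovered from just the
    F statistic and its degrees of freedom, without the full ANOVA table:
    eta^2 = F * df_b / (F * df_b + df_w)."""
    return f * df_between / (f * df_between + df_within)

# A published result reported only as F(2, 27) = 4.0
print(round(eta_squared(4.0, 2, 27), 3))  # 0.229
```

This is exactly the situation the abstract targets: a reader of a published study sees only the F value and its df, yet can still gauge how much variance the effect accounts for.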