Showing 1 to 15 of 28 results
Peer reviewed
Zumbo, Bruno D.; Kroc, Edward – Educational and Psychological Measurement, 2019
Chalmers recently published a critique of the use of ordinal alpha proposed in Zumbo et al. as a measure of test reliability in certain research settings. In this response, we take up the task of refuting Chalmers' critique. We identify three broad misconceptions that characterize Chalmers' criticisms: (1) confusing assumptions with…
Descriptors: Test Reliability, Statistical Analysis, Misconceptions, Mathematical Models
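Ordinal alpha, as proposed in the Zumbo et al. work at issue here, is coefficient alpha computed from a polychoric rather than a Pearson correlation matrix. As a minimal sketch in Python, assuming the polychoric matrix has already been estimated by some external routine (the estimation step is not shown, and the function name is illustrative):

    import numpy as np

    def alpha_from_corr(R):
        """Coefficient alpha from a k x k correlation (or covariance) matrix.
        Feeding a polychoric correlation matrix yields 'ordinal alpha';
        feeding a Pearson matrix yields ordinary standardized alpha."""
        R = np.asarray(R, dtype=float)
        k = R.shape[0]
        return (k / (k - 1)) * (1.0 - np.trace(R) / R.sum())

    # Illustrative 3-item polychoric matrix (values invented for the example).
    R_poly = np.array([[1.00, 0.60, 0.50],
                       [0.60, 1.00, 0.55],
                       [0.50, 0.55, 1.00]])
    print(alpha_from_corr(R_poly))   # roughly 0.79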
Peer reviewed
McDonald, Roderick P. – Educational and Psychological Measurement, 1978
It is shown that if a behavior domain can be described by the common factor model with a finite number of factors, the squared correlation between the sum of a selection of items and the domain total score is actually greater than coefficient alpha. (Author/JKS)
Descriptors: Factor Analysis, Item Analysis, Mathematical Models, Measurement
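For reference, coefficient alpha for a composite X of k items Y_1, ..., Y_k is
\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{Y_i}^{2}}{\sigma_{X}^{2}}\right),
\]
and the result above says that, when the behavior domain follows a common factor model with finitely many factors, the squared correlation between the item sum and the domain total score exceeds this quantity, so alpha functions as a lower bound in that setting.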
Peer reviewed
Bowers, John – Educational and Psychological Measurement, 1971
Descriptors: Error of Measurement, Mathematical Models, Test Reliability, True Scores
Peer reviewed
Raju, Nambury S. – Educational and Psychological Measurement, 1982
A necessary and sufficient condition for a perfectly homogeneous test in the sense of Loevinger is stated and proved. Using this result, a formula for computing the maximum possible KR-20 when the test variance is assumed fixed is presented. A new index of test homogeneity is also presented and discussed. (Author/BW)
Descriptors: Mathematical Formulas, Mathematical Models, Multiple Choice Tests, Test Reliability
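For dichotomous items, KR-20 is the special case of coefficient alpha obtained from the item proportions correct p_i and the total-score variance:
\[
\mathrm{KR\text{-}20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i(1 - p_i)}{\sigma_X^2}\right).
\]
Raju's result concerns maximizing this quantity over the p_i while holding \sigma_X^2 fixed; the maximizing formula itself is not reproduced here.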
Peer reviewed
Brennan, Robert L.; Prediger, Dale J. – Educational and Psychological Measurement, 1981
This paper considers some appropriate and inappropriate uses of coefficient kappa and alternative kappa-like statistics. Discussion is restricted to the descriptive characteristics of these statistics for measuring agreement with categorical data in studies of reliability and validity. (Author)
Descriptors: Classification, Error of Measurement, Mathematical Models, Test Reliability
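For orientation, a minimal Python sketch of the two statistics typically contrasted in this discussion: Cohen's kappa, which estimates chance agreement from the observed marginals, and a kappa-like index that instead fixes chance agreement at 1/q for q categories. Treating the latter as the specific alternative the authors recommend is an assumption, and the table values below are invented.

    import numpy as np

    def agreement_statistics(table):
        """Chance-corrected agreement from a q x q cross-classification of two raters.
        Returns (cohen_kappa, kappa_fixed), where kappa_fixed sets chance
        agreement to 1/q rather than estimating it from the marginals."""
        T = np.asarray(table, dtype=float)
        n, q = T.sum(), T.shape[0]
        p_o = np.trace(T) / n                                  # observed agreement
        p_e = (T.sum(axis=0) / n) @ (T.sum(axis=1) / n)        # marginal chance agreement
        return (p_o - p_e) / (1 - p_e), (p_o - 1 / q) / (1 - 1 / q)

    # Two raters classifying 100 cases into 3 categories (illustrative counts).
    print(agreement_statistics([[20, 5, 0],
                                [4, 30, 6],
                                [1, 4, 30]]))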
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1982
Results in the engineering literature on "k out of n system reliability" can be used to characterize tests based on estimates of the probability of correctly determining whether the examinee knows the correct response. In particular, the minimum number of distractors required for multiple-choice tests can be empirically determined.…
Descriptors: Achievement Tests, Mathematical Models, Multiple Choice Tests, Test Format
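The engineering quantity referred to here is standard: a "k out of n" system functions when at least k of its n components function. For independent components that each work with probability p, a short sketch (numbers illustrative; the paper's mapping from this quantity to the number of distractors is not reproduced):

    from math import comb

    def k_out_of_n_reliability(n, k, p):
        """P(at least k of n independent components work), each with probability p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    print(k_out_of_n_reliability(4, 3, 0.8))   # 0.8192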
Peer reviewed
Gati, Itamar – Educational and Psychological Measurement, 1981
This paper examines the properties of the Item Efficiency Index proposed by Neill and Jackson (1976; EJ 137 077) for minimum redundancy item analysis. (Author/BW)
Descriptors: Correlation, Factor Structure, Item Analysis, Mathematical Models
Peer reviewed
Raju, Nambury S. – Educational and Psychological Measurement, 1977
A rederivation of Lord's formula for estimating variance in multiple matrix sampling is presented as well as the ways Cronbach's coefficient alpha and the Spearman-Brown prophecy formula are related in this context. (Author/JKS)
Descriptors: Analysis of Variance, Comparative Analysis, Item Sampling, Mathematical Models
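For reference, the Spearman-Brown prophecy formula referred to here gives the reliability of a test lengthened by a factor of k when a unit-length test has reliability \rho:
\[
\rho_k = \frac{k\rho}{1 + (k - 1)\rho}.
\]
Lord's variance formula for multiple matrix sampling, and the rederivation itself, are not reproduced here.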
Peer reviewed
Yarnold, Paul R. – Educational and Psychological Measurement, 1984
When profiles are unreliable, the ordinal and interval relations among an individual's scores become uncertain or unstable. A profile reliability coefficient is derived to estimate the relative expected extent of this ordinal and interval "inversion" for any profile of K measures. (Author/DWH)
Descriptors: Error of Measurement, Mathematical Models, Profiles, Test Reliability
Peer reviewed
Jones, W. Paul – Educational and Psychological Measurement, 1991
A Bayesian alternative to interpretations based on classical reliability theory is presented. Procedures are detailed for calculation of a posterior score and credible interval with joint consideration of item sample and occasion error. (Author/SLD)
Descriptors: Bayesian Statistics, Equations (Mathematics), Mathematical Models, Statistical Inference
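The paper's exact procedure is not reproduced here, and it treats item-sample and occasion error jointly, which the sketch below does not. As the simplest illustration of the underlying idea, a normal-normal model with a single reliability coefficient gives a posterior mean that regresses the observed score toward the group mean, with a credible interval narrower than one based on the standard error of measurement alone (all numbers illustrative):

    from math import sqrt

    def posterior_true_score(x, mean, sd, rel, z=1.96):
        """Normal-normal posterior for a true score under classical assumptions:
        observed score x, group mean and SD, reliability rel.
        Returns (posterior mean, lower, upper) of a ~95% credible interval."""
        post_mean = mean + rel * (x - mean)
        post_sd = sd * sqrt(rel * (1 - rel))
        return post_mean, post_mean - z * post_sd, post_mean + z * post_sd

    # Observed score 120 on a scale with mean 100, SD 15, reliability .90.
    print(posterior_true_score(120, 100, 15, 0.90))   # (118.0, ~109.2, ~126.8)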
Peer reviewed
Werts, C. E.; And Others – Educational and Psychological Measurement, 1978
A procedure for estimating the reliability of a factorially complex composite is considered. An application of its use with Scholastic Aptitude Test data is provided. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Mathematical Models, Matrices
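The now-standard factor-analytic expression for the reliability of a unit-weighted composite is the quantity usually associated with this line of work; whether it reproduces the authors' estimator for the factorially complex case exactly is not claimed here. With loading matrix \Lambda, factor covariance matrix \Phi, and unique covariance matrix \Theta (so that \Sigma = \Lambda\Phi\Lambda' + \Theta), the reliability of the unit-weighted composite is
\[
\rho_{XX'} = \frac{\mathbf{1}'\Lambda\Phi\Lambda'\mathbf{1}}{\mathbf{1}'\Lambda\Phi\Lambda'\mathbf{1} + \mathbf{1}'\Theta\mathbf{1}} .
\]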
Peer reviewed
Schmidt, Frank L. – Educational and Psychological Measurement, 1977
Urry's procedure for approximating latent trait test models is shown to tend to underestimate item discriminatory power and overestimate item difficulty. A method for correcting these biases is provided, and implications of the procedures are discussed. (Author/JKS)
Descriptors: Item Analysis, Latent Trait Theory, Mathematical Models, Test Bias
Peer reviewed
Werts, Charles E.; Linn, Robert L. – Educational and Psychological Measurement, 1972
Descriptors: Analysis of Variance, Correlation, Factor Analysis, Mathematical Models
Peer reviewed
Zimmerman, Donald W.; And Others – Educational and Psychological Measurement, 1993
Coefficient alpha was examined through computer simulation as an estimate of test reliability under violation of two assumptions. Coefficient alpha underestimated reliability under violation of the assumption of essential tau-equivalence of subtest scores and overestimated it under violation of the assumption of uncorrelated subtest error scores.…
Descriptors: Computer Simulation, Estimation (Mathematics), Mathematical Models, Robustness (Statistics)
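A minimal sketch of this type of simulation, not the authors' design: generate congeneric items (unequal loadings on a single true score), optionally with equicorrelated errors, and compare coefficient alpha with the true reliability of the unweighted item sum. Loadings, error SDs, sample size, and the error correlation are all illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_alpha(loadings, err_sd, err_corr=0.0, n=100_000):
        """Simulate item scores y_i = loading_i * t + e_i and return
        (coefficient alpha, true reliability of the unweighted item sum)."""
        loadings, err_sd = np.asarray(loadings, float), np.asarray(err_sd, float)
        k = len(loadings)
        t = rng.standard_normal(n)                      # common true score
        R = np.full((k, k), err_corr)                   # equicorrelated error structure
        np.fill_diagonal(R, 1.0)
        E = rng.multivariate_normal(np.zeros(k), R * np.outer(err_sd, err_sd), size=n)
        Y = t[:, None] * loadings + E
        C = np.cov(Y, rowvar=False)
        alpha = (k / (k - 1)) * (1 - np.trace(C) / C.sum())
        reliability = loadings.sum() ** 2 / Y.sum(axis=1).var()
        return alpha, reliability

    # Unequal loadings (tau-equivalence violated), uncorrelated errors: alpha too low.
    print(simulate_alpha([0.9, 0.7, 0.5, 0.3], [0.6] * 4))
    # Equal loadings but positively correlated errors: alpha too high.
    print(simulate_alpha([0.7] * 4, [0.6] * 4, err_corr=0.3))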
Peer reviewed
Whitely, Susan E.; Dawis, Rene V. – Educational and Psychological Measurement, 1976
The effects of test context on verbal analogy item difficulty are systematically investigated, both in terms of simple percentage correct and in terms of easiness estimates from a parameter-invariant model (Rasch, 1960). (RC)
Descriptors: Analysis of Variance, High School Students, Item Analysis, Mathematical Models
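For reference, the parameter-invariant model cited is the Rasch model, under which the probability of a correct response depends only on the difference between the person parameter \theta_p and the item difficulty b_i:
\[
P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)},
\]
so the item parameter estimates are, in principle, invariant across the test contexts being compared.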