Showing all 3 results
Peer reviewed
Cicchetti, Domenic V.; Fleiss, Joseph L. – Applied Psychological Measurement, 1977
The weighted kappa coefficient is a measure of interrater agreement when the relative seriousness of each possible disagreement can be quantified. This Monte Carlo study demonstrates the utility of the kappa coefficient for ordinal data. Sample size is also briefly discussed. (Author/JKS)
Descriptors: Mathematical Models, Rating Scales, Reliability, Sampling
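The weighted kappa described in the abstract can be illustrated with a short sketch. The quadratic disagreement weights and the example table below are assumptions for illustration only, not data or a weighting scheme from Cicchetti and Fleiss; the article treats general weights where each disagreement's seriousness is quantified.

```python
def weighted_kappa(table):
    """Weighted kappa for two raters on an ordinal scale.

    table[i][j] = count of items rated category i by rater 1
    and category j by rater 2. Uses quadratic disagreement
    weights (an illustrative choice): 0 on the diagonal,
    growing with the distance between categories.
    """
    k = len(table)
    n = sum(sum(row) for row in table)
    row_tot = [sum(table[i][j] for j in range(k)) for i in range(k)]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    w = [[(i - j) ** 2 for j in range(k)] for i in range(k)]
    # observed weighted disagreement vs. that expected under independence
    obs = sum(w[i][j] * table[i][j] / n for i in range(k) for j in range(k))
    exp = sum(w[i][j] * row_tot[i] * col_tot[j] / n ** 2
              for i in range(k) for j in range(k))
    return 1.0 - obs / exp

# Hypothetical 3-category agreement table for two raters
table = [[20, 5, 0],
         [3, 15, 4],
         [1, 2, 10]]
print(round(weighted_kappa(table), 3))  # → 0.75
```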
Peer reviewed
Levin, Joel R.; Subkoviak, Michael J. – Applied Psychological Measurement, 1977
Textbook calculations of statistical power or sample size follow from formulas that assume that the variables under consideration are measured without error. However, in the real world of behavioral research, errors of measurement cannot be neglected. The determination of sample size is discussed, and an example illustrates a blocking strategy.…
Descriptors: Analysis of Covariance, Analysis of Variance, Error of Measurement, Hypothesis Testing
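The attenuation idea in the abstract can be sketched as follows. A measure with reliability below 1 shrinks the observed standardized effect, which inflates the sample size a textbook formula returns. The two-group normal-approximation formula and the numbers below are standard textbook material used for illustration, not the article's own derivation.

```python
from math import sqrt, ceil

def n_per_group(d_true, reliability, alpha_z=1.96, power_z=0.84):
    """Approximate n per group for a two-group comparison
    (two-sided alpha = .05, power = .80 via normal quantiles).

    Measurement error attenuates the true effect size:
    d_observed = d_true * sqrt(reliability).
    """
    d_obs = d_true * sqrt(reliability)
    return ceil(2 * ((alpha_z + power_z) / d_obs) ** 2)

print(n_per_group(0.5, 1.0))  # perfect measurement → 63
print(n_per_group(0.5, 0.7))  # unreliable measure → 90
```

With reliability .70, the same true effect requires roughly 40% more subjects per group, which is the kind of discrepancy the abstract warns textbook formulas conceal.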
Peer reviewed
Eiting, Mindert H. – Applied Psychological Measurement, 1991
A method is proposed for sequential evaluation of the reliability of psychometric instruments. Sample size is not fixed in advance; a test statistic is computed after each person is sampled, and a decision is made at each stage of the sampling process. Results from a series of Monte Carlo experiments establish the method's efficiency. (SLD)
Descriptors: Computer Simulation, Equations (Mathematics), Estimation (Mathematics), Mathematical Models
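The stage-wise sampling scheme in the abstract can be illustrated with a generic Wald sequential probability ratio test on Bernoulli data. This is an assumption-laden stand-in: Eiting's method uses a reliability statistic, not the simple success/failure log-likelihood ratio below, but the stopping logic (update after each person, decide or continue at every stage) is the same shape.

```python
from math import log

def sprt(observations, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
    """Generic Wald SPRT: illustrative only, not Eiting's statistic.

    After each observation the log-likelihood ratio is updated
    and compared with accept/reject bounds, so sample size is
    not fixed in advance.
    """
    upper = log((1 - beta) / alpha)   # cross above: accept H1
    lower = log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += log(p1 / p0) if x else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(observations)

print(sprt([1, 1, 1, 1, 1, 1, 1, 1]))  # → ('accept H1', 7)
```

Note that the decision here arrives after seven observations, before the available sample is exhausted; that early stopping is the efficiency the abstract's Monte Carlo experiments quantify for the proposed reliability procedure.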