Showing all 9 results
Peer reviewed
Cicchetti, Domenic V.; Fleiss, Joseph L. – Applied Psychological Measurement, 1977
The weighted kappa coefficient is a measure of interrater agreement when the relative seriousness of each possible disagreement can be quantified. This Monte Carlo study demonstrates the utility of the kappa coefficient for ordinal data. Sample size is also briefly discussed. (Author/JKS)
Descriptors: Mathematical Models, Rating Scales, Reliability, Sampling
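As a concrete illustration of the statistic described above, here is a minimal weighted-kappa computation for two raters, assuming linear disagreement weights w[i][j] = |i - j|; the study itself may quantify disagreement seriousness differently:

```python
import numpy as np

def weighted_kappa(confusion, weights=None):
    """Weighted kappa for two raters on an ordinal scale.

    confusion[i][j] = count of items rater A placed in category i
    and rater B placed in category j. By default uses linear
    disagreement weights w[i][j] = |i - j| (an assumption).
    """
    confusion = np.asarray(confusion, dtype=float)
    k = confusion.shape[0]
    if weights is None:
        weights = np.abs(np.subtract.outer(np.arange(k), np.arange(k)))
    p = confusion / confusion.sum()                     # observed proportions
    expected = np.outer(p.sum(axis=1), p.sum(axis=0))   # chance proportions
    return 1.0 - (weights * p).sum() / (weights * expected).sum()

print(weighted_kappa([[10, 0], [0, 10]]))  # perfect agreement -> 1.0
```

Kappa is 1 for perfect agreement and 0 when the observed weighted disagreement equals that expected by chance alone.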
Peer reviewed
Humphreys, Lloyd G.; Drasgow, Fritz – Applied Psychological Measurement, 1989
Issues arising from difference scores with zero reliability that nevertheless allow a powerful test of change are discussed. Issues include the appropriateness of underlying statistical models for psychological data and the relationship between difference scores and power. Increases in reliability always increase power for a fixed effect size.…
Descriptors: Goodness of Fit, Mathematical Models, Power (Statistics), Psychometrics
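The zero-reliability difference scores mentioned above follow directly from the classical test theory formula for the reliability of a difference D = X1 - X2; a minimal sketch of the standard formula, not the authors' full argument:

```python
def difference_score_reliability(r11, r22, r12, s1=1.0, s2=1.0):
    """Classical test theory reliability of a difference score D = X1 - X2.

    r11, r22: reliabilities of the two measures
    r12:      correlation between them
    s1, s2:   standard deviations (default: equal)
    """
    num = s1**2 * r11 + s2**2 * r22 - 2 * s1 * s2 * r12
    den = s1**2 + s2**2 - 2 * s1 * s2 * r12
    return num / den

# Equal reliabilities of .80 and intercorrelation .80 -> reliability 0,
# the degenerate case the abstract refers to.
print(difference_score_reliability(0.8, 0.8, 0.8))
```

When the intercorrelation equals the common reliability, the numerator vanishes even though each measure is individually reliable.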
Peer reviewed
Whitely, Susan E. – Applied Psychological Measurement, 1979
A model which gives maximum likelihood estimates of measurement error within the context of a simplex model for practice effects is presented. The appropriateness of the model is tested for five traits, and error estimates are compared to the classical formula estimates. (Author/JKS)
Descriptors: Error of Measurement, Error Patterns, Higher Education, Mathematical Models
Peer reviewed
Kaiser, Henry F.; Serlin, Ronald C. – Applied Psychological Measurement, 1978
A least-squares solution for the method of paired comparisons is given. The approach provokes a theorem regarding the amount of data necessary and sufficient for a solution to be obtained. A measure of the internal consistency of the least-squares fit is developed. (Author/CTM)
Descriptors: Higher Education, Least Squares Statistics, Mathematical Models, Measurement
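A least-squares solution for paired comparisons can be sketched as follows. This is the standard row-mean solution for skew-symmetric difference data under a sum-to-zero constraint, not necessarily the authors' exact formulation:

```python
import numpy as np

def ls_scale_values(d):
    """Least-squares scale values from paired-comparison differences.

    d[i][j] is an observed difference 'i minus j' (e.g. a probit of the
    proportion preferring i over j), assumed skew-symmetric. Minimizing
    sum_ij (s_i - s_j - d_ij)^2 subject to sum(s) = 0 yields the row
    means of d.
    """
    d = np.asarray(d, dtype=float)
    return d.mean(axis=1)

# Two objects, object 0 preferred by one probit unit:
print(ls_scale_values([[0.0, 1.0], [-1.0, 0.0]]))  # -> [ 0.5 -0.5]
```

Setting the gradient of the quadratic objective to zero under the constraint reduces each stationarity condition to s_k = mean_j d[k][j], which is why the row means solve the problem exactly.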
Peer reviewed
Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multiple-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
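To make the scoring contrast concrete, here is a hypothetical expected-score calculation under one common AUC scoring convention (m - j points for a correct answer on attempt j of an m-option item) with all-or-nothing knowledge and random guessing; the paper's modified Horst model is more elaborate:

```python
def expected_auc_score(p_know, m):
    """Expected answer-until-correct score for a single m-option item.

    Scoring assumed here (a common convention, not necessarily the
    paper's): a correct answer on attempt j earns m - j points.
    An examinee knows the answer with probability p_know and then
    scores m - 1; otherwise the correct attempt number is uniform
    on 1..m under random guessing.
    """
    guess_expectation = sum((m - j) / m for j in range(1, m + 1))  # = (m - 1) / 2
    return p_know * (m - 1) + (1 - p_know) * guess_expectation

print(expected_auc_score(1.0, 4))  # knower on a 4-option item -> 3.0
print(expected_auc_score(0.0, 4))  # pure guesser -> 1.5
```

Unlike zero-one scoring, where a guesser's expected score is 1/m, the AUC guesser still earns partial credit on average, which is what changes the guessing contribution to item reliability.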
Peer reviewed
Zimmerman, Donald W.; And Others – Applied Psychological Measurement, 1993
Some of the methods originally used to find relationships between reliability and power associated with a single measurement are extended to difference scores. Results, based on explicit power calculations, show that augmenting the reliability of measurement by reducing error score variance can make significance tests of difference more powerful.…
Descriptors: Equations (Mathematics), Error of Measurement, Individual Differences, Mathematical Models
Peer reviewed
Levin, Joel R.; Subkoviak, Michael J. – Applied Psychological Measurement, 1977
Textbook calculations of statistical power or sample size follow from formulas that assume that the variables under consideration are measured without error. However, in the real world of behavioral research, errors of measurement cannot be neglected. The determination of sample size is discussed, and an example illustrates blocking strategy.…
Descriptors: Analysis of Covariance, Analysis of Variance, Error of Measurement, Hypothesis Testing
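The effect of measurement error on sample size can be sketched with a normal-approximation power calculation: unreliability attenuates the standardized effect size by a factor of sqrt(reliability), so required n grows by 1/reliability. This is an illustration of the general point, not the article's blocking strategy:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(d, power=0.80, alpha=0.05, reliability=1.0):
    """Approximate per-group n for a two-sample z-test on means.

    Measurement error inflates observed-score variance, shrinking the
    standardized effect from d to d * sqrt(reliability).
    """
    z = NormalDist()
    d_obs = d * sqrt(reliability)
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / d_obs**2)

print(n_per_group(0.5))                   # error-free measurement -> 63
print(n_per_group(0.5, reliability=0.5))  # reliability .50 -> 126
```

Halving the reliability doubles the required sample size, which is exactly why textbook formulas that assume error-free variables understate n in behavioral research.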
Peer reviewed
Rozeboom, William W. – Applied Psychological Measurement, 1989
Formulas are provided for estimating the reliability of a linear composite of non-equivalent subtests given the reliabilities of component subtests. The reliability of the composite is compared to that of its components. An empirical example uses data from 170 children aged 4 through 8 years performing 34 Piagetian tasks. (SLD)
Descriptors: Elementary School Students, Equations (Mathematics), Estimation (Mathematics), Mathematical Models
Peer reviewed
Eiting, Mindert H. – Applied Psychological Measurement, 1991
A method is proposed for sequential evaluation of reliability of psychometric instruments. Sample size is unfixed; a test statistic is computed after each person is sampled and a decision is made in each stage of the sampling process. Results from a series of Monte Carlo experiments establish the method's efficiency. (SLD)
Descriptors: Computer Simulation, Equations (Mathematics), Estimation (Mathematics), Mathematical Models