Showing 14,476 to 14,490 of 27,122 results
Thomas, Hoben – Merrill Palmer Quart, 1970
Descriptors: Diagnostic Tests, Infants, Reliability, Tests
Zimmerman, Donald W. – Educ Psychol Meas, 1970
Results of this study indicate that the correlation between half-test scores over repeated splits, over persons, and over repeated testings resulting in different sets of observed scores is given by Kuder-Richardson Formula 21. (RF)
Descriptors: Statistical Analysis, Statistics, Test Reliability, Tests
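The Zimmerman abstract above turns on Kuder-Richardson Formula 21, which estimates test reliability from just the number of items, the mean, and the variance of total scores, under the assumption that all items are equally difficult. A minimal sketch (the function name and sample scores are illustrative, not from the study):

```python
import statistics

def kr21(scores, n_items):
    """Kuder-Richardson Formula 21 reliability estimate.

    scores: total test scores, one per person.
    n_items: number of dichotomously scored items (k).
    Assumes equal item difficulty, the defining KR-21 simplification.
    """
    k = n_items
    mean = statistics.fmean(scores)
    var = statistics.pvariance(scores)  # population variance of total scores
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * var))
```

For example, `kr21([4, 8, 4, 8], 10)` evaluates (10/9) * (1 - 6*4/40) = 4/9.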
Peer reviewed
Stallings, William M.; Gillmore, Gerald M. – Journal of Educational Measurement, 1971
Advocates the use of "precision" rather than "accuracy" in defining reliability. These terms are consistently differentiated in certain sciences. Review of psychological and measurement literature reveals, however, interchangeable usage of the terms in defining reliability. (Author/GS)
Descriptors: Definitions, Evaluation, Measurement, Reliability
Yost, Michael – NSPI Journal, 1971
Descriptors: Behavioral Objectives, Evaluation Criteria, Reliability, Validity
Peer reviewed
Cureton, Edward E. – Educational and Psychological Measurement, 1971
Descriptors: Correlation, Factor Analysis, Reliability, Statistical Analysis
Peer reviewed
Philip, Alistair E. – British Journal of Psychology, 1970
Descriptors: Analysis of Variance, Anxiety, Test Reliability
Peer reviewed
Pepin, Arthur C. – Clearing House, 1971
Descriptors: Educational Testing, Intelligence Tests, Test Reliability
Peer reviewed
Mandel, Robert; McLeod, Philip – Exceptional Children, 1970
Descriptors: Intelligence Tests, Socioeconomic Status, Test Reliability
Kroll, Walter – Res Quart AAHPER, 1970
Descriptors: Error Patterns, Muscular Strength, Test Reliability
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1981
This paper describes and compares procedures for estimating the reliability of proficiency tests that are scored with latent structure models. Results suggest that the predictive estimate is the most accurate of the procedures. (Author/BW)
Descriptors: Criterion Referenced Tests, Scoring, Test Reliability
Peer reviewed
Uebersax, John S. – Educational and Psychological Measurement, 1982
A more general method for calculating the Kappa measure of nominal rating agreement among multiple raters is presented. It can be used across a broad range of rating designs, including those in which raters vary with respect to their base rates and how many subjects they rate in common. (Author/BW)
Descriptors: Mathematical Formulas, Statistical Significance, Test Reliability
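The Uebersax abstract describes a generalized kappa for multiple raters with varying base rates and rating designs. The standard fixed-design baseline it extends is Fleiss' kappa, which can be sketched as follows (this is the conventional formula, not Uebersax's generalization; the data layout is illustrative):

```python
def fleiss_kappa(counts):
    """Standard Fleiss kappa for nominal agreement among raters.

    counts: one row per subject, one column per category; each row sums to
    the (fixed) number of raters. Uebersax's method relaxes exactly this
    fixed-design requirement; the baseline here does not.
    """
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    n_cats = len(counts[0])
    # marginal proportion of all ratings falling in each category
    p = [sum(row[j] for row in counts) / (n_subjects * n_raters)
         for j in range(n_cats)]
    # observed pairwise agreement, averaged over subjects
    p_obs = sum((sum(c * c for c in row) - n_raters) /
                (n_raters * (n_raters - 1)) for row in counts) / n_subjects
    # chance agreement implied by the marginals
    p_exp = sum(q * q for q in p)
    return (p_obs - p_exp) / (1 - p_exp)
```

Perfect agreement (every rater picks the same category per subject) yields kappa = 1; agreement at the chance rate yields kappa = 0.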
Peer reviewed
Woodward, J. Arthur; Bentler, P. M. – Psychometrika, 1979
Expressions involving optimal sign vectors are derived so as to yield two new applications. First, coefficient alpha for the sign-weighted composite is maximized in analogy to Lord's scale-independent solution with differential weights. Second, optimal sign vectors are used to define two groups of objects that are maximally distinct. (Author/CTM)
Descriptors: Classification, Cluster Analysis, Reliability, Statistical Analysis
Peer reviewed
Bergan, John R. – Journal of Educational Measurement, 1980
A coefficient of inter-rater agreement is presented which describes the magnitude of observer agreement as the probability estimated under a quasi-independence model that responses from different observers will be in agreement. (Author/JKS)
Descriptors: Measurement Techniques, Observation, Rating Scales, Reliability
Peer reviewed
Willson, Victor L. – Educational and Psychological Measurement, 1980
Guilford's average interrater correlation coefficient is shown to be related to the Friedman Rank Sum statistic. Under the null hypothesis of zero correlation, the resultant distribution is known and the hypothesis can be tested. Large sample and tied score cases are also considered. An example from Guilford (1954) is presented. (Author)
Descriptors: Correlation, Hypothesis Testing, Mathematical Formulas, Reliability
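The relationship Willson describes can be illustrated numerically: with m raters ranking n subjects, the Friedman rank sum statistic yields Kendall's coefficient of concordance W, and Guilford's average pairwise interrater (Spearman) correlation follows as r_bar = (m*W - 1)/(m - 1). A tie-free sketch (function and variable names are illustrative):

```python
def avg_interrater_spearman(ratings):
    """Friedman chi-square, Kendall's W, and average interrater correlation.

    ratings: m raters x n subjects; each rater's scores are ranked across
    subjects. Ties are not handled in this sketch (the large-sample and
    tied-score cases the abstract mentions need corrections).
    """
    m, n = len(ratings), len(ratings[0])

    def ranks(row):
        order = sorted(range(len(row)), key=lambda i: row[i])
        r = [0] * len(row)
        for rank, i in enumerate(order, 1):
            r[i] = rank
        return r

    R = [ranks(row) for row in ratings]
    col_sums = [sum(R[i][j] for i in range(m)) for j in range(n)]
    # Friedman rank sum statistic
    chi2 = 12 / (m * n * (n + 1)) * sum(s * s for s in col_sums) - 3 * m * (n + 1)
    W = chi2 / (m * (n - 1))           # Kendall's coefficient of concordance
    r_bar = (m * W - 1) / (m - 1)      # Guilford's average interrater correlation
    return chi2, W, r_bar
```

Three raters ranking four subjects identically give chi2 = 9, W = 1, and r_bar = 1, and the null hypothesis of zero correlation can be tested by referring chi2 to its known distribution.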
Peer reviewed
Kraemer, Helena Chmura – Journal of Educational Statistics, 1980
The robustness of hypothesis tests for the correlation coefficient under varying conditions is discussed. The effects of violations of the assumptions of linearity, homoscedasticity, and kurtosis are examined. (JKS)
Descriptors: Correlation, Hypothesis Testing, Reliability, Statistical Analysis
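Kraemer's robustness discussion concerns standard tests for a correlation coefficient such as the Fisher z-transform test, whose validity rests on assumptions like bivariate normality. A sketch of that baseline test (the function name is illustrative; the normality assumption is precisely what the abstract says can be violated):

```python
import math

def fisher_z_test(r, n, rho0=0.0):
    """Two-sided Fisher z test of H0: rho = rho0 for a sample correlation r
    from n pairs. Assumes bivariate normality; under violations of
    linearity, homoscedasticity, or normal kurtosis the nominal level
    may not hold, which is the robustness question at issue.
    """
    stat = (math.atanh(r) - math.atanh(rho0)) * math.sqrt(n - 3)
    # two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(stat) / math.sqrt(2))))
    return stat, p
```

For r = 0 the statistic is 0 and the p-value is 1; for r = 0.5 with n = 30 the statistic is about 2.85, rejecting H0 at the 0.05 level under the stated assumptions.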