Showing 14,476 to 14,490 of 27,122 results
Peer reviewed
Stallings, William M.; Gillmore, Gerald M. – Journal of Educational Measurement, 1971
Advocates the use of "precision" rather than "accuracy" in defining reliability. These terms are consistently differentiated in certain sciences. A review of the psychological and measurement literature reveals, however, that the terms are used interchangeably in defining reliability. (Author/GS)
Descriptors: Definitions, Evaluation, Measurement, Reliability
Yost, Michael – NSPI Journal, 1971
Descriptors: Behavioral Objectives, Evaluation Criteria, Reliability, Validity
Peer reviewed
Cureton, Edward E. – Educational and Psychological Measurement, 1971
Descriptors: Correlation, Factor Analysis, Reliability, Statistical Analysis
Peer reviewed
Philip, Alistair E. – British Journal of Psychology, 1970
Descriptors: Analysis of Variance, Anxiety, Test Reliability
Peer reviewed
Pepin, Arthur C. – Clearing House, 1971
Descriptors: Educational Testing, Intelligence Tests, Test Reliability
Peer reviewed
Mandel, Robert; McLeod, Philip – Exceptional Children, 1970
Descriptors: Intelligence Tests, Socioeconomic Status, Test Reliability
Kroll, Walter – Res Quart AAHPER, 1970
Descriptors: Error Patterns, Muscular Strength, Test Reliability
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1981
This paper describes and compares procedures for estimating the reliability of proficiency tests that are scored with latent structure models. Results suggest that the predictive estimate is the most accurate of the procedures. (Author/BW)
Descriptors: Criterion Referenced Tests, Scoring, Test Reliability
Peer reviewed
Uebersax, John S. – Educational and Psychological Measurement, 1982
A more general method for calculating the Kappa measure of nominal rating agreement among multiple raters is presented. It can be used across a broad range of rating designs, including those in which raters vary with respect to their base rates and how many subjects they rate in common. (Author/BW)
Descriptors: Mathematical Formulas, Statistical Significance, Test Reliability
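The Uebersax entry above describes a generalization of kappa to multiple raters with varying base rates. As background, here is a minimal sketch of the standard multi-rater (Fleiss-type) kappa, which assumes every subject is rated by the same number of raters (precisely the restriction the paper relaxes); the function name and count-matrix layout are illustrative assumptions, not taken from the article:

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss-type kappa for an n_subjects x n_categories count matrix.

    ratings[i, j] = number of raters assigning subject i to category j.
    Assumes each subject is rated by the same number of raters.
    """
    ratings = np.asarray(ratings, dtype=float)
    n_subjects, n_categories = ratings.shape
    n_raters = ratings.sum(axis=1)[0]

    # Per-subject agreement: proportion of rater pairs that agree.
    p_i = (np.sum(ratings ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Chance agreement from the marginal category proportions.
    p_j = ratings.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.sum(p_j ** 2)

    return (p_bar - p_e) / (1 - p_e)
```

With three raters in perfect agreement on every subject, the statistic is 1; Uebersax's method extends this style of calculation to designs where raters differ in base rates and in which subjects they rate.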
Peer reviewed
Woodward, J. Arthur; Bentler, P. M. – Psychometrika, 1979
Expressions involving optimal sign vectors are derived so as to yield two new applications. First, coefficient alpha for the sign-weighted composite is maximized in analogy to Lord's scale-independent solution with differential weights. Second, optimal sign vectors are used to define two groups of objects that are maximally distinct. (Author/CTM)
Descriptors: Classification, Cluster Analysis, Reliability, Statistical Analysis
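The Woodward and Bentler abstract refers to maximizing coefficient alpha for a sign-weighted composite. As standard background (not the authors' optimization over sign vectors), a minimal sketch of ordinary coefficient (Cronbach's) alpha for a unit-weighted composite of item scores; the function name and data layout are illustrative assumptions:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an n_subjects x k matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    # Sum of individual item variances versus variance of the total score.
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

The sign-vector approach in the paper chooses weights of +1 or -1 per item so that this quantity is maximized for the resulting composite.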
Peer reviewed
Bergan, John R. – Journal of Educational Measurement, 1980
A coefficient of inter-rater agreement is presented which describes the magnitude of observer agreement as the probability estimated under a quasi-independence model that responses from different observers will be in agreement. (Author/JKS)
Descriptors: Measurement Techniques, Observation, Rating Scales, Reliability
Peer reviewed
Willson, Victor L. – Educational and Psychological Measurement, 1980
Guilford's average interrater correlation coefficient is shown to be related to the Friedman Rank Sum statistic. Under the null hypothesis of zero correlation, the resultant distribution is known and the hypothesis can be tested. Large sample and tied score cases are also considered. An example from Guilford (1954) is presented. (Author)
Descriptors: Correlation, Hypothesis Testing, Mathematical Formulas, Reliability
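The Willson abstract relates Guilford's average interrater correlation to the Friedman rank sum statistic. A sketch of that relation under the standard complete-ranking setup (m raters each rank the same n objects, no ties): the Friedman chi-square determines Kendall's W, and the average Spearman correlation among raters follows as (mW - 1)/(m - 1). The function name is an illustrative assumption:

```python
import numpy as np

def average_interrater_correlation(ranks):
    """ranks: m x n array, ranks[r, j] = rank assigned by rater r to object j.

    Returns the Friedman chi-square and the average Spearman
    correlation among the m raters (no tied ranks assumed).
    """
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    col_sums = ranks.sum(axis=0)  # rank sum for each object
    chi2 = 12.0 / (m * n * (n + 1)) * np.sum(col_sums ** 2) - 3 * m * (n + 1)
    w = chi2 / (m * (n - 1))          # Kendall's coefficient of concordance
    r_bar = (m * w - 1) / (m - 1)     # average Spearman correlation
    return chi2, r_bar
```

Three raters ranking four objects identically give chi-square 9 and average correlation 1; under the null hypothesis of zero correlation, chi-square is compared against its known null distribution, as the abstract notes.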
Peer reviewed
Kraemer, Helena Chmura – Journal of Educational Statistics, 1980
The robustness of hypothesis tests for the correlation coefficient under varying conditions is discussed. The effects of violations of the assumptions of linearity, homoscedasticity, and kurtosis are examined. (JKS)
Descriptors: Correlation, Hypothesis Testing, Reliability, Statistical Analysis
Brandt, D. Scott – Computers in Libraries, 1996
Evaluation of information found on the Internet requires the same assessment of reliability, credibility, perspective, purpose and author credentials as required with print materials. Things to check include whether the source is from a moderated or unmoderated list or FTP (file transfer protocol) site; directories for affiliation and biographical…
Descriptors: Evaluation Criteria, Information Sources, Internet, Reliability
Peer reviewed
Barnes, Laura L. B.; Harp, Diane; Jung, Woo Sik – Educational and Psychological Measurement, 2002
Conducted a reliability generalization study for the State-Trait Anxiety Inventory (C. Spielberger, 1983) by reviewing and classifying 816 research articles. Average reliability coefficients were acceptable for both internal consistency and test-retest reliability, but variation was present among the estimates. Other differences are discussed.…
Descriptors: Adults, Anxiety, Generalization, Meta Analysis