Showing all 7 results
Peer reviewed
Furtado, Ovande, Jr.; Gallagher, Jere D. – Research Quarterly for Exercise and Sport, 2012
Mastery of fundamental movement skills (FMS) is an important factor in preventing weight gain and increasing physical activity. To master FMS, performance evaluation is necessary. In this study, we investigated the reliability of a new observational assessment tool. In Phase I, 110 video clips of children performing five locomotor, and six…
Descriptors: Classification, Psychomotor Skills, Basic Skills, Reliability
Peer reviewed
Cicchetti, Domenic V.; Fleiss, Joseph L. – Applied Psychological Measurement, 1977
The weighted kappa coefficient is a measure of interrater agreement when the relative seriousness of each possible disagreement can be quantified. This Monte Carlo study demonstrates the utility of the kappa coefficient for ordinal data. Sample size is also briefly discussed. (Author/JKS)
Descriptors: Mathematical Models, Rating Scales, Reliability, Sampling
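The weighted kappa described in the abstract above can be sketched for two raters on an ordinal scale. This is a minimal illustration using linear disagreement weights (one common choice); the ratings below are hypothetical, not data from the study.

```python
# Weighted Cohen's kappa for two raters on an ordinal scale,
# using linear weights: w(i, j) = |i - j| / (k - 1).
# A minimal sketch; the example ratings are hypothetical.

def weighted_kappa(rater_a, rater_b, n_categories):
    n = len(rater_a)
    # Observed joint proportions of (rater A, rater B) categories.
    obs = [[0.0] * n_categories for _ in range(n_categories)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1.0 / n
    pa = [sum(row) for row in obs]                      # rater A marginals
    pb = [sum(obs[i][j] for i in range(n_categories))   # rater B marginals
          for j in range(n_categories)]
    w = lambda i, j: abs(i - j) / (n_categories - 1)    # disagreement weight
    d_obs = sum(w(i, j) * obs[i][j]
                for i in range(n_categories) for j in range(n_categories))
    d_exp = sum(w(i, j) * pa[i] * pb[j]                 # chance disagreement
                for i in range(n_categories) for j in range(n_categories))
    return 1.0 - d_obs / d_exp

# Hypothetical ratings on a 3-point ordinal scale (0, 1, 2).
a = [0, 1, 2, 1, 0, 2, 1, 1]
b = [0, 1, 2, 2, 0, 2, 0, 1]
print(round(weighted_kappa(a, b, 3), 3))  # -> 0.714
```

Because the weights scale with the distance between categories, a one-step disagreement on the ordinal scale is penalized less than a two-step disagreement, which is the point of weighting kappa.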
Hendel, Darwin D.; Weiss, David J. – Educational and Psychological Measurement, 1970
It would appear that traditional models of reliability, in which reliability estimates for an individual are derived from group data, could yield more accurate estimates if individual difference variables, such as response consistency, were taken into consideration in the estimation of reliability. (DG)
Descriptors: Individual Differences, Measurement, Measurement Techniques, Rating Scales
Enger, John M.; Whitney, Douglas R. – 1975
There are few existing or widely known measures of agreement applicable when data are nominal or categorical. Most such coefficients are applicable only when judges classify objects or subjects into a single category. A wider range of applications, including those where judges (1) place probabilities on subjects belonging to mutually exclusive and…
Descriptors: Analysis of Variance, Classification, Measurement Techniques, Models
Newtson, Darren; And Others – 1976
Two five-week test-retest reliability studies of a measure of the unit of perception of ongoing behavior were conducted. In the first, 25 females and 23 males segmented a 7-minute action sequence under fine-unit or gross-unit instructional sets. Number of units marked at first viewing correlated .87 with number of units at retest. Correlations…
Descriptors: Attribution Theory, Behavior Patterns, Behavior Rating Scales, Cognitive Processes
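The test-retest reliability reported in the abstract above is a Pearson correlation between scores at the two sessions. A minimal sketch of that computation follows; the unit counts are hypothetical, not the study's data.

```python
# Test-retest reliability as the Pearson correlation between
# scores at first viewing and at retest. The counts below are
# hypothetical illustrations, not data from the study.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

first = [12, 30, 18, 25, 40, 22]   # units marked at first viewing
retest = [14, 28, 20, 24, 38, 25]  # units marked five weeks later
print(round(pearson_r(first, retest), 2))
```

A coefficient near 1 indicates that participants who segmented the action sequence into many units at first viewing also did so at retest, which is what a stable "unit of perception" measure requires.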
Primoff, Ernest S. – 1971
This report shows how Beta weights for the J-Coefficient may be easily developed without a formal validity study, and indicates how indicators of ability other than tests can be used to measure the same abilities that are measured by tests. See also TM 001 163-64, 166 for further information on job elements (J-Scale) procedures. (Author/DLG)
Descriptors: Achievement Rating, Correlation, Evaluation Criteria, Occupational Tests
Peer reviewed
Smith, Philip L. – Journal of Educational Measurement, 1979
In this study, generalizability theory is used to examine the dependability of student rating data for making judgments about courses and instruction. The importance of giving adequate attention to the specification of the universe of admissible observations in generalizability theory is discussed. (Author/CTM)
Descriptors: Analysis of Variance, Course Evaluation, Definitions, Higher Education
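The generalizability analysis described in the abstract above can be sketched for the simplest case: a one-facet, fully crossed design (persons by raters), estimating variance components from ANOVA mean squares. This is a minimal illustration under that assumed design; the ratings below are hypothetical.

```python
# One-facet G-study sketch (persons x raters, fully crossed):
# estimate variance components from mean squares and compute the
# generalizability coefficient for the mean over nr raters.
# A minimal sketch; the ratings below are hypothetical.

def g_study(scores):
    """scores[p][r] is the rating of person p by rater r."""
    np_, nr = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (np_ * nr)
    p_means = [sum(row) / nr for row in scores]
    r_means = [sum(scores[p][r] for p in range(np_)) / np_ for r in range(nr)]
    ss_p = nr * sum((m - grand) ** 2 for m in p_means)
    ss_r = np_ * sum((m - grand) ** 2 for m in r_means)
    ss_tot = sum((scores[p][r] - grand) ** 2
                 for p in range(np_) for r in range(nr))
    ss_pr = ss_tot - ss_p - ss_r                 # residual sum of squares
    ms_p = ss_p / (np_ - 1)
    ms_pr = ss_pr / ((np_ - 1) * (nr - 1))
    var_p = max((ms_p - ms_pr) / nr, 0.0)        # universe-score variance
    var_pr = ms_pr                               # person x rater + error
    # G coefficient (relative decisions) for the mean over nr raters.
    return var_p / (var_p + var_pr / nr)

ratings = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 2, 3], [4, 4, 5]]
print(round(g_study(ratings), 2))
```

Specifying the universe of admissible observations, as the abstract emphasizes, amounts to deciding which facets (raters, occasions, items) appear in this decomposition and how many conditions of each are averaged over.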