Showing all 9 results
Peer reviewed
Hsin-Yun Lee; You-Lin Chen; Li-Jen Weng – Journal of Experimental Education, 2024
The second version of Kaiser's Measure of Sampling Adequacy (MSA₂) has been widely applied to assess the factorability of data in psychological research. The MSA₂ is defined at the population level, and little is known about its behavior in finite samples. If estimated MSA₂s are biased due to sampling errors,…
Descriptors: Error of Measurement, Reliability, Sampling, Statistical Bias
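The entry above concerns Kaiser's Measure of Sampling Adequacy. As a point of reference, the sketch below computes the familiar overall KMO/MSA statistic from a correlation matrix; the MSA₂ variant studied in the article may be defined differently, so treat this as an illustrative assumption rather than the authors' estimator.

```python
import numpy as np

def msa_overall(R):
    """Overall measure of sampling adequacy from a correlation matrix R.

    Illustrative sketch of the familiar KMO/MSA formula; the MSA_2 variant
    studied in the article above may differ.
    """
    R = np.asarray(R, dtype=float)
    S = np.linalg.inv(R)                           # inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(S), np.diag(S)))
    P = -S / d                                     # partial (anti-image) correlations
    off = ~np.eye(R.shape[0], dtype=bool)          # off-diagonal mask
    r2 = np.sum(R[off] ** 2)
    p2 = np.sum(P[off] ** 2)
    return r2 / (r2 + p2)                          # values near 1 suggest factorable data
```

Applied to a sample correlation matrix, this statistic inherits the sampling error the abstract refers to, which is exactly the finite-sample behavior the study examines.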
Peer reviewed
Li, Minzi; Zhang, Xian – Language Testing, 2021
This meta-analysis explores the correlation between self-assessment (SA) and language performance. Sixty-seven studies with 97 independent samples involving more than 68,500 participants were included in our analysis. It was found that the overall correlation between SA and language performance was 0.466 (p < 0.01). Moderator analysis was…
Descriptors: Meta Analysis, Self Evaluation (Individuals), Likert Scales, Research Reports
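The pooled correlation of 0.466 reported above is the kind of quantity a correlation meta-analysis produces. A minimal fixed-effect sketch using Fisher's z transform follows; the published analysis very likely used a more elaborate (e.g., random-effects) model, and the function name here is hypothetical.

```python
import numpy as np

def pooled_correlation(rs, ns):
    """Fixed-effect pooled correlation via Fisher's z transform.

    rs: per-sample correlations between self-assessment and performance
    ns: corresponding sample sizes (hypothetical inputs)
    """
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)             # Fisher z for each sample
    w = ns - 3                     # inverse-variance weights, since var(z) = 1/(n - 3)
    z_bar = np.sum(w * z) / np.sum(w)
    return np.tanh(z_bar)          # back-transform the weighted mean to a correlation
```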
Powers, Sonya; Li, Dongmei; Suh, Hongwook; Harris, Deborah J. – ACT, Inc., 2016
ACT reporting categories and ACT Readiness Ranges are new features added to the ACT score reports starting in fall 2016. For each reporting category, the number correct score, the maximum points possible, the percent correct, and the ACT Readiness Range, along with an indicator of whether the reporting category score falls within the Readiness…
Descriptors: Scores, Classification, College Entrance Examinations, Error of Measurement
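To make the report fields named above concrete, here is a hypothetical mock-up of one reporting-category row; the field names and the assumption that the Readiness Range is expressed on the number-correct scale are illustrative, not ACT's specification.

```python
from dataclasses import dataclass

@dataclass
class CategoryScore:
    """Hypothetical reporting-category row: names and units are assumptions."""
    name: str
    number_correct: int
    max_points: int
    readiness_low: int     # assumed lower bound of the Readiness Range (number-correct units)
    readiness_high: int    # assumed upper bound

    @property
    def percent_correct(self) -> float:
        return 100.0 * self.number_correct / self.max_points

    @property
    def within_readiness_range(self) -> bool:
        return self.readiness_low <= self.number_correct <= self.readiness_high
```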
PDF pending restoration
Huynh, Huynh; Saunders, Joseph C. – 1979
Comparisons were made among various methods of estimating the reliability of pass-fail decisions based on mastery tests. The reliability indices that are considered are p, the proportion of agreements between two estimates, and kappa, the proportion of agreements corrected for chance. Estimates of these two indices were made on the basis of…
Descriptors: Cutting Scores, Error of Measurement, Mastery Tests, Reliability
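The two indices named in this abstract have simple definitions when two sets of pass/fail decisions are available. The sketch below computes raw agreement p and chance-corrected kappa directly; estimating them from a single administration, which is the paper's actual concern, is not attempted here.

```python
import numpy as np

def decision_consistency(pass1, pass2):
    """Raw agreement p and chance-corrected kappa for two pass/fail decisions.

    pass1, pass2: boolean arrays of mastery decisions from two forms or
    administrations (hypothetical inputs).
    """
    pass1, pass2 = np.asarray(pass1, bool), np.asarray(pass2, bool)
    p_obs = np.mean(pass1 == pass2)                  # proportion of agreements
    p1, p2 = pass1.mean(), pass2.mean()
    p_chance = p1 * p2 + (1 - p1) * (1 - p2)         # agreement expected by chance
    kappa = (p_obs - p_chance) / (1 - p_chance)
    return p_obs, kappa
```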
Livingston, Samuel A. – 1976
A distinction is made between reliability of measurement and reliability of classification; the "criterion-referenced reliability coefficient" describes the former. Application of this coefficient to the probability distribution of possible scores for a single student yields a meaningful way to describe the reliability of a single score. (Author)
Descriptors: Classification, Criterion Referenced Tests, Error of Measurement, Measurement
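The coefficient Livingston describes is usually written in terms of the classical reliability, the score variance, and the squared distance of the mean from the cut score. A minimal sketch under that assumed form:

```python
def livingston_k2(reliability, var_x, mean_x, cut_score):
    """Criterion-referenced reliability coefficient in the form commonly
    attributed to Livingston:
        k^2 = (rho * var + (mean - cut)^2) / (var + (mean - cut)^2)

    Sketch under an assumed formulation; see the original paper for the
    exact definition and its single-score interpretation.
    """
    dev2 = (mean_x - cut_score) ** 2
    return (reliability * var_x + dev2) / (var_x + dev2)
```

In this form the coefficient rises as the mean moves away from the cut score, which is why classification reliability can be high even when measurement reliability is modest.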
PDF pending restoration
Harris, Chester W. – 1971
Livingston's work is a careful analysis of what occurs when one pools two populations with different means, but similar variances and reliability coefficients. However, his work fails to advance reliability theory for the special case of criterion-referenced testing. See ED 042 802 for Livingston's paper. (MS)
Descriptors: Analysis of Variance, Criterion Referenced Tests, Error of Measurement, Reliability
Peer reviewed
Livingston, Samuel A.; Wingersky, Marilyn A. – Journal of Educational Measurement, 1979
Procedures are described for studying the reliability of decisions based on specific passing scores with tests made up of discrete items and designed to measure continuous rather than categorical traits. These procedures are based on the estimation of the joint distribution of true scores and observed scores. (CTM)
Descriptors: Cutting Scores, Decision Making, Efficiency, Error of Measurement
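Given an estimated joint distribution of true and observed scores, decision reliability reduces to summing probability over cells where the two classifications agree. The sketch below assumes the joint distribution has already been estimated on a discrete grid; the estimation procedures described in the article are not reproduced.

```python
import numpy as np

def classification_agreement(joint, true_scores, obs_scores, cut):
    """Probability that true-score and observed-score classifications agree.

    joint[i, j]: estimated probability of true score true_scores[i] and
    observed score obs_scores[j] (hypothetical input).
    cut: the passing score applied to both scales.
    """
    true_pass = np.asarray(true_scores) >= cut
    obs_pass = np.asarray(obs_scores) >= cut
    agree = np.equal.outer(true_pass, obs_pass)    # cells where the decisions match
    return float(np.sum(np.asarray(joint)[agree]))
```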
Peer reviewed
Whitely, Susan E. – Applied Psychological Measurement, 1979
Two sources of inconsistency were separated by reanalyzing data from a major study on short-term consistency. Little evidence was found for generalizability or behavioral predictability. Results supported the assumption that measurement error from short-term fluctuations is not due to systematic individual differences in response consistency.…
Descriptors: Behavior Change, Cognitive Processes, College Freshmen, Error of Measurement
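Separating short-term fluctuation from systematic individual differences is, in spirit, a variance-decomposition problem. As an assumed illustration (not the reanalysis the abstract describes), a crossed person x occasion design can be decomposed as follows:

```python
import numpy as np

def person_occasion_components(X):
    """Variance components for a crossed person x occasion design.

    X[i, t]: person i's score on occasion t. Uses the usual random-effects
    ANOVA expectations; estimates can be negative in small samples and are
    often truncated at zero.
    """
    n_p, n_o = X.shape
    grand = X.mean()
    person_means = X.mean(axis=1)
    occasion_means = X.mean(axis=0)
    ms_p = n_o * np.sum((person_means - grand) ** 2) / (n_p - 1)
    ms_o = n_p * np.sum((occasion_means - grand) ** 2) / (n_o - 1)
    resid = X - person_means[:, None] - occasion_means[None, :] + grand
    ms_res = np.sum(resid ** 2) / ((n_p - 1) * (n_o - 1))
    return (ms_p - ms_res) / n_o, (ms_o - ms_res) / n_p, ms_res
```

The residual term absorbs short-term fluctuation together with unsystematic error, which is the component at issue in the study above.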
Olejnik, Stephen F.; Porter, Andrew C. – 1978
The statistical properties of two methods of estimating gain scores for groups in quasi-experiments are compared: (1) gains in scores standardized separately for each group; and (2) analysis of covariance with estimated true pretest scores. The fan spread hypothesis is assumed for groups but not necessarily assumed for members of the groups.…
Descriptors: Academic Achievement, Achievement Gains, Analysis of Covariance, Analysis of Variance
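The second method in the entry above replaces the observed pretest with an estimated true pretest before the covariance adjustment. A minimal sketch of the usual estimated-true-score correction, with hypothetical argument names, is shown below; the paper's derivations under the fan spread hypothesis are not reproduced.

```python
import numpy as np

def estimated_true_pretest(pretest, group, reliability):
    """Estimated true pretest scores for use as the ANCOVA covariate.

    Each observed score is regressed toward its own group's pretest mean
    by the pretest reliability: T_hat = mean_g + rho * (X - mean_g).
    """
    pretest = np.asarray(pretest, float)
    group = np.asarray(group)
    t_hat = np.empty_like(pretest)
    for g in np.unique(group):
        m = group == g
        t_hat[m] = pretest[m].mean() + reliability * (pretest[m] - pretest[m].mean())
    return t_hat
```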