Peer reviewed
Subkoviak, Michael J. – Journal of Educational Measurement, 1988
Current methods for obtaining reliability indices for mastery tests can be laborious. This paper offers practitioners tables from which agreement and kappa coefficients can be read directly and provides criteria for acceptable values of agreement and kappa coefficients. (TJH)
Descriptors: Mastery Tests, Statistical Analysis, Test Reliability, Testing
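For readers unfamiliar with the two indices named in this abstract, the sketch below shows how raw agreement and kappa are typically computed from mastery/nonmastery decisions on two parallel forms. It is not Subkoviak's table-lookup procedure; the data, cutoff, and function names are illustrative assumptions.

```python
# Minimal sketch: raw agreement (p0) and kappa for mastery/nonmastery
# classifications on two parallel test forms. Illustrative data only.
import numpy as np

def agreement_and_kappa(form1_mastery, form2_mastery):
    """Return (p0, kappa) for two vectors of 0/1 mastery decisions."""
    a = np.asarray(form1_mastery, dtype=bool)
    b = np.asarray(form2_mastery, dtype=bool)
    p0 = np.mean(a == b)                      # proportion consistently classified
    p_a, p_b = a.mean(), b.mean()             # marginal mastery rates
    pc = p_a * p_b + (1 - p_a) * (1 - p_b)    # agreement expected by chance
    kappa = (p0 - pc) / (1 - pc)
    return p0, kappa

# Example: scores on two forms, mastery cutoff of 7 out of 10 items (illustrative)
scores1 = np.array([8, 5, 9, 6, 7, 10, 4, 7])
scores2 = np.array([7, 6, 9, 5, 8, 10, 3, 6])
cutoff = 7
p0, kappa = agreement_and_kappa(scores1 >= cutoff, scores2 >= cutoff)
print(f"p0 = {p0:.3f}, kappa = {kappa:.3f}")
```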
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1979
The classical approach to estimating a binomial probability function is to estimate its mean in the usual manner and substitute the result into the appropriate expression. Two alternative estimation procedures are described and examined. Emphasis is given to the single-administration estimate of mastery test reliability. (Author/CTM)
Descriptors: Cutting Scores, Mastery Tests, Probability, Scores
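As a rough illustration of the "classical" plug-in estimate this abstract refers to, the sketch below estimates the success probability from the observed mean and substitutes it into the binomial expression. The alternative estimators studied in the paper are not reproduced here, and all names and numbers are illustrative.

```python
# Minimal sketch of the classical plug-in estimate of a binomial probability
# function: estimate the success probability from the sample mean, then
# substitute it into the binomial expression.
from math import comb

def plugin_binomial_pmf(observed_scores, n_items):
    """Estimate P(X = k) for k = 0..n_items by substituting the sample mean."""
    p_hat = sum(observed_scores) / (len(observed_scores) * n_items)
    return [comb(n_items, k) * p_hat**k * (1 - p_hat)**(n_items - k)
            for k in range(n_items + 1)]

# Example: estimated probability of reaching a cutting score of 7 on a 10-item test
pmf = plugin_binomial_pmf([8, 5, 9, 6, 7, 10, 4, 7], n_items=10)
print(sum(pmf[7:]))  # estimated probability of being classified a master
```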
Peer reviewed
Peng, Chao-Ying J.; Subkoviak, Michael J. – Journal of Educational Measurement, 1980
Huynh (1976) suggested a method of approximating the reliability coefficient of a mastery test. The present study examines the accuracy of Huynh's approximation and also describes a computationally simpler approximation which appears to be generally more accurate than Huynh's. (Author/RL)
Descriptors: Error of Measurement, Mastery Tests, Mathematical Models, Statistical Analysis
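To make concrete the quantity being approximated, the sketch below computes a beta-binomial agreement index directly by numerical integration, rather than by Huynh's approximation or the simpler one this paper proposes. The ability-distribution parameters are illustrative assumptions.

```python
# Sketch of the agreement index under a beta-binomial model, computed by
# direct numerical integration over the true-ability distribution.
from scipy import stats, integrate

def beta_binomial_agreement(n_items, cutoff, alpha, beta):
    """P(both parallel scores fall on the same side of the cutoff)."""
    def integrand(theta):
        # P(X >= cutoff | theta) for a single n_items-item form
        p_master = stats.binom.sf(cutoff - 1, n_items, theta)
        same_side = p_master**2 + (1 - p_master)**2   # both pass or both fail
        return same_side * stats.beta.pdf(theta, alpha, beta)
    value, _ = integrate.quad(integrand, 0.0, 1.0)
    return value

print(beta_binomial_agreement(n_items=10, cutoff=7, alpha=6, beta=3))
```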
Subkoviak, Michael J. – 1976
A number of different definitions and indices of reliability for mastery tests have recently been proposed in an attempt to cope with possible lack of score variability that attenuates traditional coefficients. One promising index that has been suggested is the proportion of students in a group that are consistently assigned to the same mastery…
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Scores
Huynh, Huynh; Mandeville, Garrett K. – 1979
Assuming that the density p of the true ability theta in the binomial test score model is continuous on the closed interval [0, 1], a Bernstein polynomial can be used to uniformly approximate p. Then via quadratic programming techniques, least-square estimates may be obtained for the coefficients defining the polynomial. The approximation, in turn…
Descriptors: Cutting Scores, Error of Measurement, Least Squares Statistics, Mastery Tests
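The sketch below illustrates the general idea under simplifying assumptions: fit nonnegative Bernstein-polynomial coefficients to a density on [0, 1] by constrained least squares, with scipy's non-negative least squares standing in for the paper's quadratic programming step. The target density and polynomial degree are illustrative.

```python
# Hedged sketch: approximate a density on [0, 1] with a Bernstein polynomial
# whose coefficients are fit by non-negative least squares.
import numpy as np
from scipy.special import comb
from scipy.optimize import nnls
from scipy import stats

def bernstein_basis(theta, degree):
    """Matrix of Bernstein basis polynomials B_{k,degree}(theta)."""
    theta = np.asarray(theta)[:, None]
    k = np.arange(degree + 1)[None, :]
    return comb(degree, k) * theta**k * (1 - theta)**(degree - k)

degree = 8
grid = np.linspace(0.01, 0.99, 99)
target = stats.beta.pdf(grid, 6, 3)                        # illustrative ability density
coefs, _ = nnls(bernstein_basis(grid, degree), target)     # nonnegative LS fit
approx = bernstein_basis(grid, degree) @ coefs
print(np.max(np.abs(approx - target)))                     # max error on the grid
```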
Peer reviewed
Huynh, Huynh; Saunders, Joseph C. – Journal of Educational Measurement, 1980
Single administration (beta-binomial) estimates for the raw agreement index p and the corrected-for-chance kappa index in mastery testing are compared with those based on two test administrations in terms of estimation bias and sampling variability. Bias is about 2.5 percent for p and 10 percent for kappa. (Author/RL)
Descriptors: Comparative Analysis, Error of Measurement, Mastery Tests, Mathematical Models
Kane, Michael T.; Brennan, Robert L. – 1977
A large number of seemingly diverse coefficients have been proposed as indices of dependability, or reliability, for domain-referenced and/or mastery tests. In this paper, it is shown that most of these indices are special cases of two generalized indices of agreement: one that is corrected for chance, and one that is not. The special cases of…
Descriptors: Bayesian Statistics, Correlation, Criterion Referenced Tests, Cutting Scores
Besel, Ronald – 1971
The Mastery-Learning test model is extended. Methods for estimating prior probabilities are described. An adjustment matrix for transforming a probability-of-mastery measure is introduced, and empirical methods for estimating adjustment matrix parameters are derived. Adjustment matrices are interpreted as indicators of instructional effectiveness and as…
Descriptors: Criterion Referenced Tests, Decision Making, Groups, Individual Testing
Subkoviak, Michael J. – 1977
Four different procedures were used for estimating the proportion of persons who would be classified consistently as either passing both of two parallel tests or failing both. These four methods were applied at each of four different mastery cutoff scores for each of three tests of different lengths. Data were based on 50 replications of each procedure…
Descriptors: Criterion Referenced Tests, Cutting Scores, Data Analysis, Data Collection
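A minimal Monte Carlo sketch in the same spirit, assuming a beta-binomial model: simulate two parallel forms and record the proportion of examinees classified the same way on both. Sample sizes, replication count, and parameters are illustrative, not those of the original study.

```python
# Monte Carlo sketch of classification consistency across two parallel forms.
import numpy as np

rng = np.random.default_rng(0)

def simulate_consistency(n_examinees, n_items, cutoff, alpha, beta, n_reps=50):
    """Average proportion consistently classified over n_reps replications."""
    results = []
    for _ in range(n_reps):
        theta = rng.beta(alpha, beta, size=n_examinees)   # true abilities
        form1 = rng.binomial(n_items, theta)              # parallel form 1
        form2 = rng.binomial(n_items, theta)              # parallel form 2
        results.append(np.mean((form1 >= cutoff) == (form2 >= cutoff)))
    return np.mean(results)

print(simulate_consistency(n_examinees=50, n_items=10, cutoff=7, alpha=6, beta=3))
```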
Brennan, Robert L. – 1979
Using the basic principles of generalizability theory, a psychometric model for domain-referenced interpretations is proposed, discussed, and illustrated. This approach, assuming an analysis of variance or linear model, is applicable to numerous data collection designs, including the traditional persons-crossed-with-items design, which is treated…
Descriptors: Analysis of Variance, Cost Effectiveness, Criterion Referenced Tests, Cutting Scores
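As a rough illustration of the kind of dependability index such a generalizability model yields for the persons-crossed-with-items design, the sketch below computes a phi(lambda)-style coefficient from variance components. The formula, component names, and example numbers are stated as assumptions for illustration, not as Brennan's worked example.

```python
# Hedged sketch of a dependability index for domain-referenced decisions
# at a cut score, built from persons-by-items variance components.
def phi_lambda(var_persons, var_items, var_residual, n_items, mean_score, cutoff):
    """Dependability of mastery decisions at cut score `cutoff`."""
    signal = var_persons + (mean_score - cutoff) ** 2
    error = (var_items + var_residual) / n_items   # absolute error variance
    return signal / (signal + error)

# Example with made-up variance components (proportion-correct metric)
print(phi_lambda(var_persons=0.02, var_items=0.01, var_residual=0.06,
                 n_items=20, mean_score=0.75, cutoff=0.70))
```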
Bormuth, John R. – 1978
The feasibility of criterion referenced testing is held to be dependent on the tenability of two postulates: (1) that bias can be controlled in a principled manner from one test to the next; and (2) that one mental process measured by such tests may lawfully interact with another. Without the first postulate, criterion scores could not be…
Descriptors: Achievement Tests, Career Development, Criterion Referenced Tests, Cutting Scores
Bormuth, John R. – 1979
A procedure is demonstrated for constructing tables showing, for each score on a commercial reading achievement test, the percentage of real-world materials that the testee is likely to comprehend with at least a criterion level of proficiency, the percentages of students in a local or national sample who can competently comprehend a given…
Descriptors: Criterion Referenced Tests, Elementary Secondary Education, Equivalency Tests, Expectancy Tables