Showing 1 to 15 of 29 results
Peer reviewed
Subkoviak, Michael J. – Journal of Educational Measurement, 1988
Current methods for obtaining reliability indices for mastery tests can be laborious. This paper offers practitioners tables from which agreement and kappa coefficients can be read directly and provides criteria for acceptable values of agreement and kappa coefficients. (TJH)
Descriptors: Mastery Tests, Statistical Analysis, Test Reliability, Testing
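To make the two indices named in this entry concrete, here is a minimal Python sketch (not the paper's lookup tables) that computes the raw agreement coefficient and Cohen's kappa from two mastery/nonmastery classifications of the same examinees. The function name and the data are illustrative only.

```python
# Illustrative sketch: agreement coefficient p0 and Cohen's kappa for two
# mastery/nonmastery classifications of the same examinees (e.g., from two
# administrations of a mastery test).

def agreement_and_kappa(first, second):
    """first, second: lists of 0/1 mastery decisions for the same examinees."""
    n = len(first)
    # Raw agreement: proportion of examinees given the same decision both times.
    p0 = sum(a == b for a, b in zip(first, second)) / n
    # Chance agreement from the marginal mastery rates of each classification.
    m1 = sum(first) / n
    m2 = sum(second) / n
    pc = m1 * m2 + (1 - m1) * (1 - m2)
    kappa = (p0 - pc) / (1 - pc)
    return p0, kappa

# Example: 10 examinees classified on two occasions.
first  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
second = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(agreement_and_kappa(first, second))  # -> (0.8, ~0.583)
```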
Peer reviewed
Henson, Robert; Templin, Jonathan; Douglas, Jeffrey – Journal of Educational Measurement, 2007
Consider test data, a specified set of dichotomous skills measured by the test, and an IRT cognitive diagnosis model (ICDM). Statistical estimation of the data set using the ICDM can provide examinee estimates of mastery for these skills, referred to generally as attributes. With such detailed information about each examinee, future instruction…
Descriptors: Simulation, Teaching Methods, Testing, Diagnostic Tests
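For readers new to cognitive diagnosis models, the DINA model is one widely cited member of this family; it is shown here only as an illustration and is not necessarily the ICDM used in this article. An examinee's attribute pattern determines whether all skills an item requires are mastered, and slip and guess parameters govern the response probability:

```latex
P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
  = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}},
\qquad
\eta_{ij} = \prod_{k} \alpha_{ik}^{\,q_{jk}}
```

Here s_j and g_j are item j's slip and guessing parameters, alpha_ik indicates whether examinee i has mastered attribute k, and q_jk is the Q-matrix entry indicating whether item j requires attribute k.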
Peer reviewed
Brennan, Robert L.; Kane, Michael T. – Journal of Educational Measurement, 1977
An index for the dependability of mastery tests is described. Assumptions necessary for the index and the mathematical development of the index are provided. (Author/JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Test Reliability
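For orientation, the Brennan–Kane index of dependability for a cut score lambda is commonly written as below; this is the form given in standard generalizability-theory treatments, and the article itself should be consulted for the assumptions and full development.

```latex
\Phi(\lambda) =
  \frac{\sigma^2(p) + (\mu - \lambda)^2}
       {\sigma^2(p) + (\mu - \lambda)^2 + \sigma^2(\Delta)}
```

Here sigma^2(p) is the person (universe-score) variance, mu the mean proportion-correct score, lambda the cut score on the proportion-correct scale, and sigma^2(Delta) the absolute error variance.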
Peer reviewed
Subkoviak, Michael J. – Journal of Educational Measurement, 1976
A number of different reliability coefficients have recently been proposed for tests used to differentiate between groups such as masters and nonmasters. One promising index is the proportion of students in a class that are consistently assigned to the same mastery group across two testings. The present paper proposes a single test administration…
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Probability
Peer reviewed
Peng, Chao-Ying J.; Subkoviak, Michael J. – Journal of Educational Measurement, 1980
Huynh (1976) suggested a method of approximating the reliability coefficient of a mastery test. The present study examines the accuracy of Huynh's approximation and also describes a computationally simpler approximation which appears to be generally more accurate than the former. (Author/RL)
Descriptors: Error of Measurement, Mastery Tests, Mathematical Models, Statistical Analysis
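Neither paper's exact formulas are reproduced here, but the general flavor of a single-administration normal approximation to the agreement coefficient can be sketched as follows: treat the scores on two hypothetical parallel administrations as bivariate normal with correlation equal to an internal-consistency reliability estimate, and evaluate the probability of the same classification on both. The specific choices below (KR-21 as the correlation, a continuity-corrected cut) are assumptions of this sketch, not a transcription of Huynh's or Peng and Subkoviak's method.

```python
# Hedged sketch of a normal-approximation agreement coefficient for a mastery
# test (illustrative only; see the cited papers for the exact approximations).
from scipy.stats import norm, multivariate_normal

def approx_agreement(mean_score, sd_score, reliability, cut_score):
    """Approximate p0 = P(same mastery decision on two parallel administrations).

    mean_score, sd_score : observed-score mean and SD (number-correct scale)
    reliability          : e.g., a KR-21 estimate, used as the correlation
    cut_score            : minimum number correct for a mastery decision
    """
    # Standardized cut point with a continuity correction (an assumption here).
    z = (cut_score - 0.5 - mean_score) / sd_score
    rho = reliability
    # P(both administrations fall below the cut) under the bivariate normal.
    both_below = multivariate_normal.cdf([z, z], mean=[0.0, 0.0],
                                         cov=[[1.0, rho], [rho, 1.0]])
    # p0 = P(both below) + P(both at or above) = 2*F2(z, z; rho) + 1 - 2*F(z).
    return 2.0 * both_below + 1.0 - 2.0 * norm.cdf(z)

# Example: 30-item test, mean 21, SD 4.5, KR-21 = .82, mastery cut at 24 correct.
print(round(approx_agreement(21, 4.5, 0.82, 24), 3))
```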
Peer reviewed
Algina, James; Noe, Michael J. – Journal of Educational Measurement, 1978
A computer simulation study was conducted to investigate Subkoviak's index of reliability for criterion-referenced tests, called the coefficient of agreement. Results indicate that the index can be adequately estimated. (JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Measurement, Test Reliability
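The logic of a simulation check like the one described here can be illustrated briefly: generate examinees with known true proportion-correct scores, simulate two parallel administrations, and compare the resulting two-administration agreement with a single-administration estimate (for instance, the approx_agreement sketch above). The beta true-score distribution, test length, and cut score below are arbitrary illustrations, not the conditions of the Algina and Noe study.

```python
# Toy Monte Carlo check of an agreement-coefficient estimator (illustrative).
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
n_items, cut, n_examinees = 30, 24, 5000

true_p = beta.rvs(8, 4, size=n_examinees, random_state=rng)  # true proportion correct
form_a = rng.binomial(n_items, true_p)                       # administration 1
form_b = rng.binomial(n_items, true_p)                       # administration 2

# "True" agreement: proportion classified the same way on both administrations.
true_p0 = np.mean((form_a >= cut) == (form_b >= cut))
print("two-administration agreement:", round(true_p0, 3))
# A single-administration estimate (e.g., approx_agreement fed form_a's mean,
# SD, and a reliability estimate) would then be compared against true_p0.
```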
Peer reviewed
Subkoviak, Michael J. – Journal of Educational Measurement, 1978
Four different methods for estimating the proportions of testees properly classified as having mastered or not mastered test content are examined, using data from the Scholastic Aptitude Test. All four methods prove reasonably accurate and all show some bias under certain conditions. (JKS)
Descriptors: Bias, Criterion Referenced Tests, Mastery Tests, Measurement
Peer reviewed
Tillman, Murray H. – Journal of Educational Measurement, 1974
Two testing packets, Formative Exercises T-TE-15A and T-TE-15B, are reviewed. The Exercises are based on Bloom's concept of learning for mastery and are designed to acquaint teachers with the principles of mastery learning and provide examples of formative evaluation. One form of the exercises provides instant feedback to the examinee; the other,…
Descriptors: Feedback, Formative Evaluation, Mastery Tests, Multiple Choice Tests
Peer reviewed
Haladyna, Thomas Michael – Journal of Educational Measurement, 1974
Classical test construction and analysis procedures are applicable and appropriate for use with criterion referenced tests when samples of both mastery and nonmastery examinees are employed. (Author/BB)
Descriptors: Criterion Referenced Tests, Item Analysis, Mastery Tests, Test Construction
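As one concrete instance of the classical procedures the abstract refers to, an item discrimination index can be taken as the difference in proportion correct between the mastery and nonmastery samples. The sketch below is a generic illustration of that idea, not Haladyna's specific analysis.

```python
# Classical item analysis with mastery and nonmastery groups (illustrative).
def item_stats(mastery_responses, nonmastery_responses):
    """Each argument: 0/1 scores on one item for one group of examinees.
    Returns (difficulty in mastery group, difficulty in nonmastery group,
    discrimination = difference between the two proportions correct)."""
    p_mastery = sum(mastery_responses) / len(mastery_responses)
    p_nonmastery = sum(nonmastery_responses) / len(nonmastery_responses)
    return p_mastery, p_nonmastery, p_mastery - p_nonmastery

# Example: an item answered correctly by 18 of 20 masters and 7 of 20 nonmasters.
print(item_stats([1] * 18 + [0] * 2, [1] * 7 + [0] * 13))  # -> (0.9, 0.35, ~0.55)
```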
Peer reviewed
Popham, W. James – Journal of Educational Measurement, 1978
A defense of the use of standards with criterion-referenced testing is made in response to Glass's article (TM 504 031). (JKS)
Descriptors: Academic Standards, Criterion Referenced Tests, Evaluation Criteria, Mastery Tests
Peer reviewed
Hartke, Alan R. – Journal of Educational Measurement, 1978
Latent partition analysis is shown to be useful in determining the conceptual homogeneity of an item population. Such item populations are useful for mastery testing. Applications of latent partition analysis in assessing content validity are suggested. (Author/JKS)
Descriptors: Higher Education, Item Analysis, Item Sampling, Mastery Tests
Peer reviewed
Block, James H. – Journal of Educational Measurement, 1978
The use of setting standards for criterion-referenced tests is defended in a response to two papers by Gene Glass and Nancy Burton. (JKS)
Descriptors: Academic Standards, Criterion Referenced Tests, Cutting Scores, Decision Making
Peer reviewed
Wilcox, Rand R.; Harris, Chester W. – Journal of Educational Measurement, 1977
Emrick's proposed method for determining a mastery level cut-off score is questioned. Emrick's method is shown to be useful only in limited situations. (JKS)
Descriptors: Correlation, Cutting Scores, Mastery Tests, Mathematical Models
Peer reviewed
Lord, Frederic M. – Journal of Educational Measurement, 1977
A variety of practical applications of item characteristic curve test theory are discussed. Among these applications are tailored testing, two-stage testing, determining whether two tests measure the same latent trait, and measuring item bias towards minority or other groups. (Author/JKS)
Descriptors: Computer Programs, Latent Trait Theory, Mastery Tests, Measurement
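For context, the item characteristic curve in this line of work is typically the three-parameter logistic function, reproduced here from standard IRT references rather than from the article itself:

```latex
P_j(\theta) = c_j + (1 - c_j)\,\frac{1}{1 + e^{-1.7\, a_j (\theta - b_j)}}
```

Here a_j, b_j, and c_j are item j's discrimination, difficulty, and lower-asymptote (guessing) parameters, and 1.7 is the conventional scaling constant relating the logistic curve to the normal ogive.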
Peer reviewed
Scriven, Michael – Journal of Educational Measurement, 1978
The utility of setting standards for educational decisions, even though those standards may be somewhat arbitrary, is defended in this response to Glass's article (TM 504 031). (JKS)
Descriptors: Academic Standards, Criterion Referenced Tests, Cutting Scores, Decision Making