Publication Date
In 2025: 2
Since 2024: 9
Since 2021 (last 5 years): 18
Since 2016 (last 10 years): 38
Since 2006 (last 20 years): 92
Audience
Practitioners: 39
Researchers: 35
Teachers: 20
Administrators: 6
Policymakers: 3
Students: 3
Media Staff: 1
Parents: 1
Location
Connecticut: 16
California: 13
Australia: 8
New Jersey: 6
Texas: 5
Florida: 4
Michigan: 4
Nebraska: 4
Pennsylvania: 4
Alabama: 3
Illinois: 3
Laws, Policies, & Programs
Elementary and Secondary…: 2
Education Consolidation…: 1
What Works Clearinghouse Rating
Meets WWC Standards without Reservations: 1
Meets WWC Standards with or without Reservations: 1
Does not meet standards: 1

Stahlbrand, Kristina; And Others – Action in Teacher Education, 1981
A Minimum Objective System representing the critical skills for success in school and community includes: (1) math skills written in behavioral objective format; (2) mastery tests to measure student progress and retention; (3) a management system; and (4) curriculum materials. (JN)
Descriptors: Behavioral Objectives, Curriculum Development, Educational Administration, Long Range Planning

van der Linden, Wim J. – Psychometrika, 1981
Decision rules for assigning students to treatments based upon aptitudes or criterion scores are discussed. Popular procedures are criticized and a Bayesian approach is recommended. The effect of unreliability of aptitude or criterion scores is also discussed. (JKS)
Descriptors: Aptitude Treatment Interaction, Criterion Referenced Tests, Cutting Scores, Decision Making
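
As a rough illustration of the kind of Bayesian assignment rule the abstract points toward (not van der Linden's own procedure), the sketch below shrinks an observed aptitude score toward the group mean under a classical true-score model and then assigns whichever treatment has the higher expected criterion score. All function names, regression lines, and parameter values are hypothetical placeholders.

```python
# Hypothetical sketch of a Bayesian assignment rule: regress the observed
# aptitude score toward the group mean (Kelley's formula), then choose the
# treatment with the higher expected criterion score at the estimated true
# aptitude. Parameter values are illustrative, not taken from the article.

def kelley_true_score(observed, group_mean, reliability):
    """Posterior mean of true aptitude under a classical true-score model."""
    return reliability * observed + (1.0 - reliability) * group_mean

def expected_criterion(true_score, intercept, slope):
    """Assumed linear regression of the criterion on true aptitude."""
    return intercept + slope * true_score

def assign_treatment(observed, group_mean=50.0, reliability=0.8):
    tau = kelley_true_score(observed, group_mean, reliability)
    # Hypothetical treatment regressions: remediation pays off for low-aptitude
    # students, the standard program for high-aptitude students.
    remedial = expected_criterion(tau, intercept=30.0, slope=0.4)
    standard = expected_criterion(tau, intercept=10.0, slope=0.8)
    return "standard" if standard >= remedial else "remedial"

if __name__ == "__main__":
    for score in (35, 50, 65):
        print(score, assign_treatment(score))
```

Because the observed score is shrunk toward the mean before the comparison, lower reliability pushes borderline examinees toward the decision favored at the group mean, which is one way the unreliability issue raised in the abstract shows up.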

Wilcox, Rand R. – Psychometrika, 1979
The problem of determining an optimal passing score for a mastery test is discussed, when the purpose of the test is to predict success on an external criterion. For the case of constant losses for the two possible error types, a method for determining passing scores is derived. (Author/JKS)
Descriptors: Criterion Referenced Tests, Cutting Scores, Mastery Tests, Mathematical Models
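
A minimal sketch of the decision-theoretic setup described above, assuming constant losses for the two error types and an already-estimated probability of success on the external criterion at each observed score. This is a generic reconstruction rather than the method derived in the article, and the data values are invented.

```python
# Sketch: choose the passing score that minimizes expected loss when a false
# positive (passing an examinee who then fails the criterion) costs loss_fp
# and a false negative (failing an examinee who would have succeeded) costs
# loss_fn. Success probabilities and the score distribution are made up.

def expected_loss(cut, p_success_given_score, p_score, loss_fp, loss_fn):
    loss = 0.0
    for score, p_s in p_score.items():
        p_succ = p_success_given_score[score]
        if score >= cut:                      # examinee is passed
            loss += p_s * (1.0 - p_succ) * loss_fp
        else:                                 # examinee is failed
            loss += p_s * p_succ * loss_fn
    return loss

def best_cut(p_success_given_score, p_score, loss_fp=1.0, loss_fn=1.0):
    candidates = sorted(p_score)              # every observed score is a candidate cut
    return min(candidates, key=lambda c: expected_loss(
        c, p_success_given_score, p_score, loss_fp, loss_fn))

if __name__ == "__main__":
    scores = range(0, 11)
    p_score = {s: 1.0 / 11 for s in scores}             # uniform, illustrative
    p_success = {s: min(1.0, 0.1 * s) for s in scores}  # monotone, illustrative
    print(best_cut(p_success, p_score, loss_fp=1.0, loss_fn=2.0))
```

With constant losses, this search reduces to passing any examinee whose success probability exceeds loss_fp / (loss_fp + loss_fn), so the optimal cut sits at the lowest score clearing that ratio.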

Cahan, Sorel; Cohen, Nora – Educational and Psychological Measurement, 1990
A solution is offered to the problem that the probabilities of the classification errors for masters versus nonmasters, based on competency test results, cannot be manipulated equally. Eschewing the typical arbitrary establishment of observed-score standards below 100 percent, the solution incorporates a self-correction of wrong answers.…
Descriptors: Classification, Error of Measurement, Mastery Tests, Minimum Competency Testing

Du, Yi; And Others – Applied Measurement in Education, 1993
A new computerized mastery test is described that builds on the Lewis and Sheehan procedure (sequential testlets) (1990), but uses fuzzy set decision theory to determine stopping rules and the Rasch model to calibrate items and estimate abilities. Differences between fuzzy set and Bayesian methods are illustrated through an example. (SLD)
Descriptors: Bayesian Statistics, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
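
For orientation, the sketch below implements the Bayesian-style sequential stopping logic in the spirit of the Lewis and Sheehan procedure that the article builds on, with responses scored under the Rasch model on a discrete ability grid. It does not implement the authors' fuzzy set stopping rules, and the item difficulties, mastery cutoff, and decision thresholds are illustrative assumptions.

```python
import math

# Sequential mastery testing sketch: after each testlet, compute the posterior
# probability that ability exceeds the mastery cutoff and stop once that
# probability is decisively high or low. Rasch-model likelihood, flat prior.

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def posterior_mastery(responses, difficulties, theta_cut=0.0):
    """P(theta >= theta_cut | responses) on a discrete grid with a flat prior."""
    grid = [-3.0 + 0.1 * i for i in range(61)]
    weights = []
    for theta in grid:
        like = 1.0
        for x, b in zip(responses, difficulties):
            p = rasch_p(theta, b)
            like *= p if x else (1.0 - p)
        weights.append(like)
    mass_above = sum(w for theta, w in zip(grid, weights) if theta >= theta_cut)
    return mass_above / sum(weights)

def decide(responses, difficulties, pass_at=0.95, fail_at=0.05):
    p = posterior_mastery(responses, difficulties)
    if p >= pass_at:
        return "master"
    if p <= fail_at:
        return "nonmaster"
    return "continue"   # administer another testlet

if __name__ == "__main__":
    print(decide([1, 1, 1, 0, 1], [-0.5, 0.0, 0.2, 0.5, 0.8]))
```

Roughly speaking, a fuzzy set version replaces the sharp posterior thresholds with membership grades for "master" and "nonmaster," which is the kind of contrast the article illustrates.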

McCarthy, Jean – 1987
The fundamental purposes of this study were to develop mastery tests in the cognitive and psychomotor domains for skin and scuba diving and to establish validity and reliability for the tests. A table of specifications was developed for each domain, and a pilot study refined the initial test batteries into their final form. In the main study,…
Descriptors: Cutting Scores, Higher Education, Knowledge Level, Mastery Tests

Wilcox, Rand R. – 1978
A mastery test is frequently described as follows: an examinee responds to n dichotomously scored test items. Depending upon the examinee's observed (number correct) score, either a mastery decision is made and the examinee is advanced to the next level of instruction, or a nonmastery decision is made and the examinee is given remedial work. This…
Descriptors: Comparative Analysis, Cutting Scores, Factor Analysis, Mastery Tests

Huynh, Huynh; Mandeville, Garrett K. – 1979
Assuming that the density p of the true ability theta in the binomial test score model is continuous on the closed interval [0, 1], a Bernstein polynomial can be used to uniformly approximate p. Then, via quadratic programming techniques, least-squares estimates may be obtained for the coefficients defining the polynomial. The approximation, in turn…
Descriptors: Cutting Scores, Error of Measurement, Least Squares Statistics, Mastery Tests
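
The sketch below reconstructs the general approach the abstract describes, assuming the binomial error model: write the true-ability density as a Bernstein polynomial and fit its coefficients to the observed-score distribution by constrained least squares. The data, the polynomial degree, and the use of a general-purpose SLSQP solver in place of a dedicated quadratic-programming routine are all illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.special import comb, beta
from scipy.optimize import minimize

# Approximate the true-ability density p(theta) by a Bernstein polynomial of a
# chosen degree and fit its coefficients to the observed-score frequencies of
# an n-item test under the binomial error model.

def design_matrix(n_items, degree):
    """Integral of the binomial likelihood of score x against the k-th Bernstein basis function."""
    A = np.zeros((n_items + 1, degree + 1))
    for x in range(n_items + 1):
        for k in range(degree + 1):
            A[x, k] = (comb(n_items, x) * comb(degree, k)
                       * beta(x + k + 1, n_items - x + degree - k + 1))
    return A

def fit_density(observed_counts, n_items, degree=6):
    f = np.asarray(observed_counts, dtype=float)
    f = f / f.sum()                          # relative frequencies of scores 0..n
    A = design_matrix(n_items, degree)

    def sse(c):
        return float(np.sum((A @ c - f) ** 2))

    # Coefficients must be nonnegative, and the density must integrate to one;
    # each Bernstein basis function integrates to 1 / (degree + 1).
    cons = {"type": "eq", "fun": lambda c: c.sum() / (degree + 1) - 1.0}
    x0 = np.full(degree + 1, 1.0)            # starting point: the uniform density
    res = minimize(sse, x0, method="SLSQP",
                   bounds=[(0.0, None)] * (degree + 1), constraints=[cons])
    return res.x                             # Bernstein coefficients of p(theta)

if __name__ == "__main__":
    counts = [1, 2, 4, 6, 10, 14, 18, 20, 15, 7, 3]   # invented 10-item test data
    print(fit_density(counts, n_items=10))
```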

McLarty, Joyce R. – 1979
The problem of establishing appropriate passing scores is one of evaluation rather than estimation and is not amenable to exact solution. It must therefore be approached by (1) identifying criteria for judging the acceptability of the passing score, (2) collecting the data appropriate to assessing each relevant criterion, and (3) judging how well the…
Descriptors: Criterion Referenced Tests, Cutting Scores, Decision Making, Evaluation Criteria

Johnson, Dale D.; And Others – 1980
The work reported culminates research by the Project on the Assessment and Analysis of Word Identification Skills in Reading. The Word Identification Test battery was designed for elementary school children, with attention to the major issues pertaining to skills mastery and assessment that are raised in the review of mastery learning. Five…
Descriptors: Elementary Education, Language Skills, Mastery Learning, Mastery Tests

Mathews, John J.; And Others – 1978
To assess the reading ability of military service applicants and accessions, and to determine the relationship between Armed Services Vocational Aptitude Battery (ASVAB) measures and reading scores, 2,432 subjects were given the Gates-MacGinitie reading test and a subsample of 818 were given the Nelson-Denny reading test. The median reading grade…
Descriptors: Mastery Tests, Military Personnel, Postsecondary Education, Predictive Measurement

Sheehan, Daniel S.; Davis, Robbie G. – School Science and Mathematics, 1979
The steps discussed in developing a test battery are (a) stating the purpose of the battery, (b) specifying performance objectives, (c) generating an item pool, (d) analyzing the items, (e) selecting items, (f) determining cut-off scores, and (g) validating the battery. (MP)
Descriptors: Criterion Referenced Tests, Elementary Secondary Education, Item Analysis, Mastery Tests
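
As one concrete piece of the sequence above, step (d), item analysis, usually comes down to an item difficulty (proportion correct) and a discrimination index; the sketch below computes both from a 0/1 response matrix, using simulated responses in place of real test data.

```python
import numpy as np

# Classical item analysis: difficulty = proportion correct, discrimination =
# corrected item-total point-biserial correlation (item vs. total score
# excluding that item). The response data are simulated for illustration.

def item_analysis(responses):
    """responses: examinees x items array of 0/1 item scores."""
    R = np.asarray(responses, dtype=float)
    difficulty = R.mean(axis=0)                      # p-value of each item
    results = []
    for j in range(R.shape[1]):
        rest = R.sum(axis=1) - R[:, j]               # total score minus item j
        r_pb = np.corrcoef(R[:, j], rest)[0, 1]      # corrected point-biserial
        results.append((difficulty[j], r_pb))
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ability = rng.normal(size=200)
    item_locations = np.linspace(-1.5, 1.5, 8)
    # Simulate 0/1 responses from a simple logistic model.
    p = 1.0 / (1.0 + np.exp(-(ability[:, None] - item_locations[None, :])))
    data = (rng.random((200, 8)) < p).astype(int)
    for j, (diff, disc) in enumerate(item_analysis(data)):
        print(f"item {j}: difficulty={diff:.2f}  discrimination={disc:.2f}")
```

Items with very high or very low difficulty, or with low discrimination, are the usual candidates for revision at the item-selection step (e).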

Scriven, Michael – Journal of Educational Measurement, 1978
The utility of setting standards for educational decisions, even though those standards may be somewhat arbitrary, is defended in this response to Glass's article (TM 504 031). (JKS)
Descriptors: Academic Standards, Criterion Referenced Tests, Cutting Scores, Decision Making

Wilcox, Rand R. – Journal of Educational Statistics, 1977
False-positive and false-negative decisions are the two possible errors committed with a mastery test; yet the estimation of the likelihood of committing these errors has not been investigated. Two methods of this type of estimation are presented and discussed. (Author/JKS)
Descriptors: Bayesian Statistics, Hypothesis Testing, Mastery Tests, Measurement Techniques
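
To make the two error types concrete, the sketch below computes the probabilities of false-positive and false-negative mastery decisions under an assumed beta prior on true ability and a binomial test model. The prior, test length, passing score, and mastery criterion are illustrative, and this is not one of the two estimation methods the article presents.

```python
import numpy as np
from scipy import stats

# P(false positive) = P(pass the test and true ability below the mastery
# criterion); P(false negative) = P(fail the test and true ability at or above
# it). True ability theta ~ Beta(a, b); observed score X ~ Binomial(n, theta).

def error_rates(n_items, cut_score, theta0, a, b, grid=2001):
    theta = np.linspace(0.0, 1.0, grid)
    prior = stats.beta.pdf(theta, a, b)                            # density of true ability
    p_pass = 1.0 - stats.binom.cdf(cut_score - 1, n_items, theta)  # P(X >= cut | theta)
    nonmaster = theta < theta0
    # Average over the uniform grid on [0, 1] to approximate the integrals.
    fp = float(np.mean(prior * p_pass * nonmaster))
    fn = float(np.mean(prior * (1.0 - p_pass) * ~nonmaster))
    return fp, fn

if __name__ == "__main__":
    fp, fn = error_rates(n_items=20, cut_score=16, theta0=0.8, a=8, b=3)
    print(f"P(false positive) = {fp:.3f}, P(false negative) = {fn:.3f}")
```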

Urzillo, Robert L. – Contemporary Education, 1987
The popular educational reform trend toward competency testing is appropriate for measuring achievement in the basic skills areas, but competency testing in core curriculum areas may cause teachers to teach to the test, lead to minimal standards at the expense of excellence, and stifle the transfer of learning and creativity. (CB)
Descriptors: Academic Standards, Competency Based Education, Core Curriculum, Educational Change