Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 0
  Since 2006 (last 20 years): 2
Descriptor
  Mastery Tests: 48
  Test Items: 48
  Test Construction: 20
  Criterion Referenced Tests: 16
  Item Analysis: 15
  Cutting Scores: 12
  Test Reliability: 11
  Difficulty Level: 10
  Test Length: 10
  Test Validity: 10
  Comparative Analysis: 9
Author
  Glas, Cees A. W.: 2
  Hambleton, Ronald K.: 2
  Harlan, Hugh A.: 2
  Huynh, Huynh: 2
  Phillips, Gary W.: 2
  Saunders, Joseph C.: 2
  Vos, Hans J.: 2
  Wilcox, Rand R.: 2
  Bauer, Ernest A.: 1
  Beard, Jacob G.: 1
  Bennett, Judith A.: 1
Education Level
  Elementary Education: 1
  Elementary Secondary Education: 1
  Grade 1: 1
  Grade 2: 1
  Grade 3: 1
  Higher Education: 1
Audience
  Researchers: 8
  Practitioners: 1
Location
  Argentina: 1
  Michigan: 1
  Nebraska: 1
  South Carolina: 1
Assessments and Surveys
  Comprehensive Tests of Basic…: 3
  Metropolitan Achievement Tests: 2
  Texas Educational Assessment…: 1
  Woodcock Reading Mastery Test: 1
Giuliodori, Mauricio J.; Lujan, Heidi L.; DiCarlo, Stephen E. – Advances in Physiology Education, 2009
We used collaborative testing in a veterinary physiology course (65 students) to answer the following questions: 1) do students with individual correct responses or students with individual incorrect responses change their answers during group testing? and 2) do high-performing students make the decisions, that is, are low-performing students…
Descriptors: Feedback (Response), Group Testing, Mastery Tests, Physiology
Wiberg, Marie – International Journal of Testing, 2006
A simulation study of a sequential computerized mastery test is carried out with items modeled by the 3-parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…
Descriptors: Test Length, Computer Simulation, Mastery Tests, Item Response Theory
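The 3-parameter logistic (3PL) model referred to in the abstract gives the probability of a correct response as a function of examinee ability and three item parameters. A minimal sketch of that response function, and of simulating responses from it, is below (the item parameters and ability value are illustrative, not taken from the study).

```python
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """Probability of a correct response under the 3-parameter logistic model:
    c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Simulate responses for one examinee on a short mastery test
# (illustrative item parameters, not from the study).
rng = np.random.default_rng(0)
a = np.array([1.2, 0.8, 1.5, 1.0])   # discrimination
b = np.array([-0.5, 0.0, 0.3, 1.0])  # difficulty
c = np.array([0.2, 0.25, 0.2, 0.2])  # pseudo-guessing

theta = 0.4                           # examinee ability
p = p_correct_3pl(theta, a, b, c)
responses = rng.random(p.shape) < p   # Bernoulli draws from the model
print(p.round(3), responses.astype(int))
```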

Hicks, Marilyn Maginley – Multivariate Behavioral Research, 1981
An empirical investigation of the statistical procedure entitled nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. This method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)
Descriptors: Correlation, Factor Analysis, Mastery Tests, Measurement Techniques
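One common reading of nonlinear principal components analysis in the Gnanadesikan sense is to augment the observed variables with polynomial terms and apply ordinary PCA; the component with the smallest eigenvalue then describes an approximate nonlinear relation among the variables. A rough sketch of that idea on simulated data (an assumption about the procedure, not the study's actual analysis):

```python
import numpy as np

# Two variables with a known quadratic relation plus noise (illustrative data).
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = x**2 + rng.normal(scale=0.1, size=200)

# Augment with second-degree terms, as in Gnanadesikan-style generalized PCA.
Z = np.column_stack([x, y, x**2, y**2, x * y])
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)      # standardize each column

eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
# The eigenvector with the smallest eigenvalue defines the quadratic
# combination of the variables that is nearly constant across the data,
# i.e. the recovered nonlinear relation.
print("smallest eigenvalue:", eigvals[0].round(4))
print("coefficients:", eigvecs[:, 0].round(3))
```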

Wasik, John L. – Educational and Psychological Measurement, 1979
A computer program to generate individualized objective test forms for use in a Self-Paced Statistics (SPS) course is described. The program features disproportionate sampling from different item domains and an enhanced character generation facility for test printing purposes. (Author)
Descriptors: Computer Programs, Individualized Instruction, Item Sampling, Mastery Learning
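A minimal sketch of the kind of disproportionate domain sampling the abstract describes, assuming hypothetical item pools and per-domain item counts (the domain names, pool sizes, and counts are illustrative, not from the program described):

```python
import random

# Hypothetical item pools per content domain and the (disproportionate)
# number of items to draw from each for one individualized form.
item_pools = {
    "descriptive_stats": [f"D{i}" for i in range(1, 21)],
    "probability":       [f"P{i}" for i in range(1, 16)],
    "inference":         [f"I{i}" for i in range(1, 31)],
}
items_per_domain = {"descriptive_stats": 3, "probability": 2, "inference": 5}

def generate_form(pools, counts, seed):
    """Draw a random, individualized test form: a fixed number of items
    from each domain, sampled without replacement."""
    rng = random.Random(seed)
    form = []
    for domain, n in counts.items():
        form.extend(rng.sample(pools[domain], n))
    return form

print(generate_form(item_pools, items_per_domain, seed=42))
```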

Hartke, Alan R. – Journal of Educational Measurement, 1978
Latent partition analysis is shown to be useful in determining the conceptual homogeneity of an item population. Such item populations are useful for mastery testing. Applications of latent partition analysis in assessing content validity are suggested. (Author/JKS)
Descriptors: Higher Education, Item Analysis, Item Sampling, Mastery Tests
Glas, Cees A. W.; Vos, Hans J. – 1998
A version of sequential mastery testing is studied in which response behavior is modeled by an item response theory (IRT) model. First, a general theoretical framework is sketched that is based on a combination of Bayesian sequential decision theory and item response theory. A discussion follows on how IRT based sequential mastery testing can be…
Descriptors: Adaptive Testing, Bayesian Statistics, Item Response Theory, Mastery Tests
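The paper's framework combines Bayesian sequential decision theory with an IRT model. The toy sketch below conveys the sequential idea only, substituting a much simpler beta-binomial model for IRT: after each response, the posterior probability of mastery is updated, and testing stops once that probability leaves an indifference band (the prior, mastery standard, and thresholds are illustrative assumptions, not the paper's loss-based rule).

```python
from scipy.stats import beta

def sequential_mastery(responses, pi0=0.7, a=1.0, b=1.0,
                       upper=0.90, lower=0.10, max_items=30):
    """Toy sequential mastery rule with a Beta(a, b) prior on the true
    proportion-correct pi.  After each item, compute P(pi >= pi0 | data);
    declare master/nonmaster once that probability leaves [lower, upper],
    otherwise administer another item."""
    successes = 0
    for n, r in enumerate(responses[:max_items], start=1):
        successes += r
        post_mastery = beta.sf(pi0, a + successes, b + n - successes)
        if post_mastery >= upper:
            return "master", n
        if post_mastery <= lower:
            return "nonmaster", n
    return "undecided", min(len(responses), max_items)

# An examinee who answers 11 of 12 items correctly is classified early.
print(sequential_mastery([1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]))
```
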
Luecht, Richard M. – 2003
This paper presents a multistage adaptive testing test development paradigm that promises to handle content balancing and other test development needs, psychometric reliability concerns, and item exposure. The bundled multistage adaptive testing (BMAT) framework is a modification of the computer-adaptive sequential testing framework introduced by…
Descriptors: Adaptive Testing, Computer Assisted Testing, High Stakes Tests, Mastery Tests
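The core mechanism shared by multistage designs such as BMAT is routing: an examinee's performance on one stage determines which module is administered next. A deliberately simple routing sketch (module names and cutoffs are hypothetical, not the BMAT specification):

```python
def route(stage1_score, easy_cut=4, hard_cut=7):
    """Route an examinee to a second-stage module based on the number-correct
    score on a 10-item routing module (cutoffs are illustrative)."""
    if stage1_score <= easy_cut:
        return "module_easy"
    if stage1_score >= hard_cut:
        return "module_hard"
    return "module_medium"

for score in (3, 5, 8):
    print(score, "->", route(score))
```
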
Glas, Cees A. W.; Vos, Hans J. – 2000
This paper focuses on a version of sequential mastery testing (i.e., classifying students as a master/nonmaster or continuing testing and administering another item or testlet) in which response behavior is modeled by a multidimensional item response theory (IRT) model. First, a general theoretical framework is outlined that is based on a…
Descriptors: Adaptive Testing, Bayesian Statistics, Classification, Computer Assisted Testing

Wilcox, Rand R. – Applied Psychological Measurement, 1980
This paper discusses how certain recent technical advances might be extended to examine proficiency tests which are conceptualized as representing a variety of skills with one or more items per skill. In contrast to previous analyses, errors at the item level are included. (Author/BW)
Descriptors: Mastery Tests, Minimum Competencies, Minimum Competency Testing, Sampling

Huynh, Huynh; Saunders, Joseph C. – Journal of Educational Measurement, 1980
Single administration (beta-binomial) estimates for the raw agreement index p and the corrected-for-chance kappa index in mastery testing are compared with those based on two test administrations in terms of estimation bias and sampling variability. Bias is about 2.5 percent for p and 10 percent for kappa. (Author/RL)
Descriptors: Comparative Analysis, Error of Measurement, Mastery Tests, Mathematical Models
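For context, the two-administration versions of the indices being estimated are straightforward to compute: the raw agreement index p is the proportion of examinees given the same mastery decision on both administrations, and kappa corrects that proportion for chance agreement. A small sketch with made-up decisions:

```python
import numpy as np

def agreement_indices(first_pass, second_pass):
    """Raw agreement p and chance-corrected kappa for mastery/nonmastery
    decisions made on two test administrations (1 = master, 0 = nonmaster)."""
    first = np.asarray(first_pass)
    second = np.asarray(second_pass)
    p_raw = np.mean(first == second)
    # Chance agreement from the marginal mastery rates of each administration.
    p1, p2 = first.mean(), second.mean()
    p_chance = p1 * p2 + (1 - p1) * (1 - p2)
    kappa = (p_raw - p_chance) / (1 - p_chance)
    return p_raw, kappa

# Hypothetical decisions for 10 examinees on two parallel administrations.
print(agreement_indices([1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
                        [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]))
```
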
Subkoviak, Michael J.; Harris, Deborah J. – 1984
This study examined three statistical methods for selecting items for mastery tests. One is the pretest-posttest method due to Cox and Vargas (1966); it is computationally simple, but has a number of serious limitations. The second is a latent trait method recommended by van der Linden (1981); it is computationally complex, but has a number of…
Descriptors: Comparative Analysis, Elementary Secondary Education, Item Analysis, Latent Trait Theory
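The first of these, the Cox and Vargas pretest-posttest method, selects the items whose proportion correct increases most from pretest to posttest. A minimal sketch with hypothetical 0/1 response matrices:

```python
import numpy as np

def difference_index(pretest, posttest):
    """Cox-Vargas style difference index: proportion correct on the posttest
    minus proportion correct on the pretest, computed per item.
    Rows are examinees, columns are items (1 = correct)."""
    return np.mean(posttest, axis=0) - np.mean(pretest, axis=0)

# Hypothetical response matrices for 5 examinees and 3 items.
pre  = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0], [0, 0, 0], [0, 1, 1]])
post = np.array([[1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 0, 0], [1, 1, 1]])

d = difference_index(pre, post)
print(d)                       # items with the largest d are retained
print(np.argsort(d)[::-1])     # item ranking for mastery-test selection
```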

Hanna, Gerald S.; Bennett, Judith A. – Educational and Psychological Measurement, 1984
The role and utility currently ascribed to measures of instructional sensitivity are summarized. A case is made that the rationale for the assessment of instructional sensitivity can be applied to all achievement tests and should not be restricted to criterion-referenced mastery tests. (Author/BW)
Descriptors: Achievement Tests, Context Effect, Criterion Referenced Tests, Mastery Tests
Graham, Darol L. – 1974
The adequacy of a test developed for statewide assessment of basic mathematics skills was investigated. The test, composed of multiple-choice items reflecting a series of behavioral objectives, was compared with a more extensive criterion measure generated from the same objectives by the application of a strict item sampling model. In many…
Descriptors: Comparative Testing, Criterion Referenced Tests, Educational Assessment, Item Sampling

Van der Linden, Wim J. – Journal of Educational Measurement, 1982
An ignored aspect of standard setting, namely the possibility that Angoff or Nedelsky judges specify inconsistent probabilities (e.g., low probabilities for easy items but high probabilities for hard items), is explored. A latent trait method is proposed to estimate such misspecifications, and an index of consistency is defined. (Author/PN)
Descriptors: Cutting Scores, Latent Trait Theory, Mastery Tests, Mathematical Models
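One way to see the inconsistency at issue: under a Rasch model, a judge's Angoff probability p for an item of difficulty b implies a borderline ability theta = b + logit(p); if the judgments were internally consistent, these implied abilities would coincide across items, so their spread can serve as a rough consistency check. This is a simplified illustration under an assumed Rasch model, not the index defined in the paper, and the judgments and difficulties below are made up.

```python
import numpy as np

def implied_thetas(angoff_probs, difficulties):
    """Under the Rasch model P(correct) = 1 / (1 + exp(-(theta - b))),
    a judged probability p for an item of difficulty b implies
    theta = b + log(p / (1 - p))."""
    p = np.asarray(angoff_probs, dtype=float)
    b = np.asarray(difficulties, dtype=float)
    return b + np.log(p / (1 - p))

# Hypothetical Angoff judgments and Rasch difficulties for five items.
probs = np.array([0.80, 0.70, 0.55, 0.60, 0.40])
diffs = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

theta_i = implied_thetas(probs, diffs)
print(theta_i.round(2))
print("spread (rough inconsistency index):", theta_i.std().round(2))
```
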
deGruijter, Dato N. M. – 1980
The setting of standards involves subjective value judgments. The inherent arbitrariness of specific standards has been severely criticized by Glass. His antagonists agree that standard setting is a judgmental task, but they have pointed out that arbitrariness, in the positive sense of serious judgmental decisions, is unavoidable. Further, small…
Descriptors: Cutting Scores, Difficulty Level, Error of Measurement, Mastery Tests