Divgi, D. R. – 1978
One aim of criterion-referenced testing is to classify an examinee without reference to a norm group; therefore, any statements about the dependability of such classification ought to be group-independent also. A population-independent index is proposed in terms of the probability of incorrect classification near the cutoff true score. The…
Descriptors: Criterion Referenced Tests, Cutting Scores, Difficulty Level, Error of Measurement

Brennan, Robert L.; Kane, Michael T. – Psychometrika, 1977
Using the assumption of randomly parallel tests and concepts from generalizability theory, three signal/noise ratios for domain-referenced tests are developed, discussed, and compared. The three ratios have the same noise but different signals depending upon the kind of decision to be made as a result of measurement. (Author/JKS)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Error of Measurement, Mathematical Models

Scheetz, James P.; vonFraunhofer, J. Anthony – 1980
Subkoviak suggested a technique for estimating both group reliability and the reliability associated with assigning a given individual to a mastery or non-mastery category based on a single test administration. Two assumptions underlie this model. First, it is assumed that had successive test administrations occurred, scores for each individual…
Descriptors: Criterion Referenced Tests, Cutting Scores, Error of Measurement, Higher Education

van der Linden, Wim J. – 1982
A latent trait method is presented to investigate the possibility that Angoff or Nedelsky judges specify inconsistent probabilities in standard setting techniques for objectives-based instructional programs. It is suggested that judges frequently specify a low probability of success for an easy item but a high probability for a hard item. The…
Descriptors: Criterion Referenced Tests, Cutting Scores, Error of Measurement, Interrater Reliability

Marshall, J. Laird – 1976
A summary is provided of: the rationale for questioning the applicability of classical reliability measures to criterion-referenced tests; an extension of the classical theory of true and error scores to incorporate a theory of dichotomous decisions; and a presentation of the mean split-half coefficient of agreement, a single-administration test index…
Descriptors: Career Development, Computer Programs, Criterion Referenced Tests, Decision Making

Brennan, Robert L. – 1979
Using the basic principles of generalizability theory, a psychometric model for domain-referenced interpretations is proposed, discussed, and illustrated. This approach, assuming an analysis of variance or linear model, is applicable to numerous data collection designs, including the traditional persons-crossed-with-items design, which is treated…
Descriptors: Analysis of Variance, Cost Effectiveness, Criterion Referenced Tests, Cutting Scores

Haladyna, Tom; Roid, Gale – 1976
Three approaches to the construction of achievement tests are compared: construct, operational, and empirical. The construct approach is based upon classical test theory and measures an abstract representation of the instructional objectives. The operational approach specifies instructional intent through instructional objectives, facet design,…
Descriptors: Academic Achievement, Achievement Tests, Career Development, Comparative Analysis