Brennan, Robert L.; Kim, Stella Y.; Lee, Won-Chan – Educational and Psychological Measurement, 2022
This article extends multivariate generalizability theory (MGT) to tests with different random-effects designs for each level of a fixed facet. There are numerous situations in which the design of a test and the resulting data structure are not definable by a single design. One example is mixed-format tests that are composed of multiple-choice and…
Descriptors: Multivariate Analysis, Generalizability Theory, Multiple Choice Tests, Test Construction

Lee, Eunjung; Lee, Won-Chan; Brennan, Robert L. – College Board, 2012
In almost all high-stakes testing programs, test equating is necessary to ensure that test scores across multiple test administrations are equivalent and can be used interchangeably. Test equating becomes even more challenging in mixed-format tests, such as Advanced Placement Program® (AP®) Exams, that contain both multiple-choice and constructed…
Descriptors: Test Construction, Test Interpretation, Test Norms, Test Reliability

Brennan, Robert L. – Applied Measurement in Education, 2011
Broadly conceived, reliability involves quantifying the consistencies and inconsistencies in observed scores. Generalizability theory, or G theory, is particularly well suited to addressing such matters in that it enables an investigator to quantify and distinguish the sources of inconsistencies in observed scores that arise, or could arise, over…
Descriptors: Generalizability Theory, Test Theory, Test Reliability, Item Response Theory

Brennan, Robert L.; Kane, Michael T. – Journal of Educational Measurement, 1977
An index for the dependability of mastery tests is described. Assumptions necessary for the index and the mathematical development of the index are provided. (Author/JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Test Reliability
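A dependability index of the kind this entry describes is usually written as Phi(lambda) in the generalizability-theory literature. The sketch below assumes that standard form and uses invented variance components; nothing here is quoted from the paper itself:

```python
# Hedged sketch of a dependability index for mastery tests, in the form
# commonly attributed to the G-theory literature (assumed, not quoted):
#
#   Phi(lambda) = (var_p + (mu - lam)**2) / (var_p + (mu - lam)**2 + var_delta)

def phi(var_p, var_delta, mu, lam):
    """Dependability of mastery decisions at cut score lam.

    var_p     -- universe-score (person) variance
    var_delta -- absolute error variance
    mu        -- domain mean
    lam       -- cut score
    """
    signal = var_p + (mu - lam) ** 2
    return signal / (signal + var_delta)

# Hypothetical values: person variance 0.04, absolute error variance 0.01,
# domain mean 0.75, cut score 0.70:
d = phi(var_p=0.04, var_delta=0.01, mu=0.75, lam=0.70)
```

Note that the index grows as the cut score moves away from the domain mean: decisions far from the mean are easier to make dependably.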

Brennan, Robert L.; Prediger, Dale J. – Educational and Psychological Measurement, 1981
This paper considers some appropriate and inappropriate uses of coefficient kappa and alternative kappa-like statistics. Discussion is restricted to the descriptive characteristics of these statistics for measuring agreement with categorical data in studies of reliability and validity. (Author)
Descriptors: Classification, Error of Measurement, Mathematical Models, Test Reliability
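The contrast this entry draws, between coefficient kappa and a kappa-like alternative whose chance-agreement term is fixed at 1/k, can be sketched as below. The agreement table is hypothetical, and the second statistic's form is the one usually associated with this paper:

```python
# Coefficient kappa vs. a kappa-like statistic with chance fixed at 1/k.
# (Illustrative only; the rating data below are invented.)

def cohen_kappa(table):
    """Cohen's kappa from a square agreement table (list of lists)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n            # observed agreement
    row = [sum(table[i]) / n for i in range(k)]             # row marginals
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_e = sum(row[i] * col[i] for i in range(k))            # chance from marginals
    return (p_o - p_e) / (1 - p_e)

def kappa_n(table):
    """Kappa-like statistic with chance agreement fixed at 1/k."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n
    return (p_o - 1.0 / k) / (1 - 1.0 / k)

# Two raters classifying 100 examinees as master/nonmaster (hypothetical):
table = [[70, 10],
         [5, 15]]
```

With skewed marginals like these, the two statistics diverge, which is the descriptive difference the paper examines.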

Brennan, Robert L.; Lockwood, Robert E. – Applied Psychological Measurement, 1980
Generalizability theory is used to characterize and quantify expected variance in cutting scores and to compare the Nedelsky and Angoff procedures for establishing a cutting score. Results suggest that the restricted nature of the Nedelsky (inferred) probability scale may limit its applicability in certain contexts. (Author/BW)
Descriptors: Cutting Scores, Generalization, Statistical Analysis, Test Reliability
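The two standard-setting procedures compared in this entry can be sketched as follows. The item judgments are invented; the Nedelsky values illustrate the restricted (inferred) probability scale the abstract mentions:

```python
# Hypothetical sketch of the Angoff and Nedelsky standard-setting procedures.

def angoff_cut(judged_probs):
    """Angoff: sum over items of a judge's estimated probability that a
    minimally competent examinee answers correctly (continuous scale)."""
    return sum(judged_probs)

def nedelsky_cut(remaining_options):
    """Nedelsky: for each item, the judge eliminates options a minimally
    competent examinee would rule out; the item's contribution is
    1 / (number of options remaining) -- a restricted scale."""
    return sum(1.0 / r for r in remaining_options)

# Five 4-option multiple-choice items (hypothetical judgments):
angoff = angoff_cut([0.6, 0.7, 0.5, 0.8, 0.4])
nedelsky = nedelsky_cut([2, 2, 4, 1, 3])
```

For 4-option items the Nedelsky item values can only be 1/4, 1/3, 1/2, or 1, which is the restriction the abstract suggests may limit the procedure's applicability.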

Brennan, Robert L. – Educational and Psychological Measurement, 1975
Variance components from a split-plot factorial (SPF) design were used to estimate reliability for schools and for persons within schools. Reliability estimates for persons and for schools were each compared across the SPF and randomized block (RB) designs. (Author/BJG)
Descriptors: Analysis of Variance, Evaluation Methods, Schools, Statistical Analysis

Brennan, Robert L. – Journal of Educational Measurement, 1995
Generalizability theory is used to show that the assumption that reliability for groups is greater than that for persons (and that error variance for groups is less than that for persons) is not necessarily true. Examples are provided from course evaluation and performance test literature. (SLD)
Descriptors: Course Evaluation, Decision Making, Equations (Mathematics), Generalizability Theory

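The entry's point, that reliability for group means need not exceed reliability for persons, can be illustrated numerically. The formulas are a standard G-theory sketch and the variance components are invented, not taken from the article:

```python
# Hypothetical variance components for a (persons:groups) x items design.
# When the between-group component is small, group-mean reliability can be
# far *lower* than person reliability.

var_g = 0.05    # groups
var_pg = 1.00   # persons within groups
var_gi = 0.05   # group x item interaction
var_res = 1.00  # residual (person x item, error)

n_p = 10        # persons per group
n_i = 20        # items

# Generalizability coefficient for persons (ignoring grouping):
rel_persons = (var_g + var_pg) / (var_g + var_pg + var_res / n_i)

# Generalizability coefficient for group means:
rel_groups = var_g / (var_g + var_pg / n_p + var_gi / n_i
                      + var_res / (n_p * n_i))
```

Here persons are measured quite reliably while group means are not, because averaging over persons cannot compensate for near-zero between-group variance.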
Brennan, Robert L. – 1974
An attempt is made to explore the use of subjective probabilities in the analysis of item data, especially criterion-referenced item data. Two assumptions are implicit: (1) one wants to obtain a maximum amount of information with respect to an item using a minimum number of subjects; and (2) once the item is validated, it may well be administered…
Descriptors: Confidence Testing, Criterion Referenced Tests, Guessing (Tests), Item Analysis

Brennan, Robert L.; Kane, Michael T. – Psychometrika, 1977
Using the assumption of randomly parallel tests and concepts from generalizability theory, three signal/noise ratios for domain-referenced tests are developed, discussed, and compared. The three ratios have the same noise but different signals depending upon the kind of decision to be made as a result of measurement. (Author/JKS)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Error of Measurement, Mathematical Models

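As background for this entry, the standard identity linking a signal/noise ratio to a reliability-like coefficient (a general G-theory convention, not a formula quoted from the paper) is:

```latex
\[
\frac{S}{N} = \frac{\sigma^2(\text{signal})}{\sigma^2(\text{noise})},
\qquad
\rho = \frac{S/N}{1 + S/N},
\qquad
\frac{S}{N} = \frac{\rho}{1 - \rho}.
\]
```

Under this convention, holding the noise term fixed while changing the signal, as the abstract describes, yields different ratios (and different coefficients) for different decision types.
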
Brennan, Robert L.; Kane, Michael T. – 1975
When classes are the units of analysis, estimates of the reliability of class means are needed. Classical test theory makes it difficult to treat this problem adequately; generalizability theory, however, provides a natural framework for dealing with it. Each of four possible formulas for the generalizability of class means is derived…
Descriptors: Analysis of Variance, Classes (Groups of Students), Correlation, Error Patterns

Brennan, Robert L.; Johnson, Eugene G. – Educational Measurement: Issues and Practice, 1995
The application of generalizability theory to reliability and error variance estimation for performance assessment scores is discussed. Decision makers concerned with performance assessment need to recognize the restrictions that limit generalizability, such as constraints that reduce the number of possible tasks, rater quality,…
Descriptors: Decision Making, Educational Assessment, Error of Measurement, Estimation (Mathematics)

Kane, Michael T.; Brennan, Robert L. – 1977
A large number of seemingly diverse coefficients have been proposed as indices of dependability, or reliability, for domain-referenced and/or mastery tests. In this paper, it is shown that most of these indices are special cases of two generalized indices of agreement: one that is corrected for chance, and one that is not. The special cases of…
Descriptors: Bayesian Statistics, Correlation, Criterion Referenced Tests, Cutting Scores

Brennan, Robert L. – Educational Measurement: Issues and Practice, 1998
Explores the relationship between measurement theory and practice, considering five broad categories: (1) models, assumptions, and terminology; (2) reliability; (3) validity; (4) scaling; and (5) setting performance standards. It must be recognized that measurement is not an end in itself. (SLD)
Descriptors: Educational Assessment, Educational Practices, Measurement Techniques, Models

Brennan, Robert L. – 1979
Using the basic principles of generalizability theory, a psychometric model for domain-referenced interpretations is proposed, discussed, and illustrated. This approach, assuming an analysis of variance or linear model, is applicable to numerous data collection designs, including the traditional persons-crossed-with-items design, which is treated…
Descriptors: Analysis of Variance, Cost Effectiveness, Criterion Referenced Tests, Cutting Scores