Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 0
Since 2006 (last 20 years) | 7
Descriptor
Test Interpretation | 10
Test Use | 10
Scores | 7
Validity | 7
Theories | 4
Evidence | 3
Generalization | 3
Inferences | 3
Test Results | 3
Academic Achievement | 2
Beliefs | 2
Source
Journal of Educational Measurement | 10
Author
Kane, Michael T. | 2
Borsboom, Denny | 1
Brennan, Robert L. | 1
Frisbie, David A. | 1
Harnisch, Delwyn L. | 1
Lord, Frederic M. | 1
Markus, Keith A. | 1
Moss, Pamela A. | 1
Newton, Paul E. | 1
Sireci, Stephen G. | 1
Publication Type
Journal Articles | 10
Opinion Papers | 7
Book/Product Reviews | 1
Guides - Non-Classroom | 1
Reports - Descriptive | 1
Reports - Evaluative | 1
Reports - Research | 1
Audience
Researchers | 1
Newton, Paul E. – Journal of Educational Measurement, 2013
Kane distinguishes between two kinds of argument: the interpretation/use argument and the validity argument. This commentary considers whether there really are two kinds of argument, two arguments, or just one. It concludes that there is just one argument: the validity argument. (Contains 2 figures and 5 notes.)
Descriptors: Validity, Test Interpretation, Test Use
Sireci, Stephen G. – Journal of Educational Measurement, 2013
Kane (this issue) presents a comprehensive review of validity theory and reminds us that the focus of validation is on test score interpretations and use. In reacting to his article, I support the argument-based approach to validity and all of the major points regarding validation made by Dr. Kane. In addition, I call for a simpler, three-step…
Descriptors: Validity, Theories, Test Interpretation, Test Use
Borsboom, Denny; Markus, Keith A. – Journal of Educational Measurement, 2013
According to Kane (this issue), "the validity of a proposed interpretation or use depends on how well the evidence supports" the claims being made. Because truth and evidence are distinct, this means that the validity of a test score interpretation could be high even though the interpretation is false. As an illustration, we discuss the case of…
Descriptors: Evidence, Ethics, Validity, Theories
Brennan, Robert L. – Journal of Educational Measurement, 2013
Kane's paper "Validating the Interpretations and Uses of Test Scores" is the most complete and clearest discussion yet available of the argument-based approach to validation. At its most basic level, validation as formulated by Kane is fundamentally a simply-stated two-step enterprise: (1) specify the claims inherent in a particular interpretation…
Descriptors: Validity, Test Interpretation, Test Use, Scores
Kane, Michael T. – Journal of Educational Measurement, 2013
This response to the comments contains three main sections, each addressing a subset of the comments. In the first section, I will respond to the comments by Brennan, Haertel, and Moss. All of these comments suggest ways in which my presentation could be extended or improved; I generally agree with their suggestions, so my response to their…
Descriptors: Validity, Test Interpretation, Test Use, Scores
Moss, Pamela A. – Journal of Educational Measurement, 2013
Studies of data use illuminate ways in which education professionals have used test scores and other evidence relevant to students' learning--in action in their own contexts of work--to make decisions about their practice. These studies raise instructive challenges for a validity theory that focuses on intended interpretations and uses of test…
Descriptors: Validity, Test Use, Test Interpretation, Scores
Kane, Michael T. – Journal of Educational Measurement, 2013
To validate an interpretation or use of test scores is to evaluate the plausibility of the claims based on the scores. An argument-based approach to validation suggests that the claims based on the test scores be outlined as an argument that specifies the inferences and supporting assumptions needed to get from test responses to score-based…
Descriptors: Test Interpretation, Validity, Scores, Test Use

Harnisch, Delwyn L. – Journal of Educational Measurement, 1983
The Student-Problem (S-P) methodology is described using an example of 24 students on a test of 44 items. Information based on the students' test scores and the modified caution index is put to diagnostic use. A modification of the S-P methodology is applied to domain-referenced testing. (Author/CM)
Descriptors: Academic Achievement, Educational Practices, Item Analysis, Responses
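The modified caution index mentioned in this entry is a person-fit statistic computed from a scored response matrix. As an illustration only (the entry does not reproduce Harnisch's data), the sketch below follows the commonly cited Harnisch-Linn form of the index, which compares a student's observed response pattern with the Guttman pattern expected from that student's number-right score; the function name and toy data are hypothetical.

```python
# Illustrative sketch of a modified caution index (person-fit statistic) of the
# kind used in S-P analysis; follows the usual Harnisch-Linn formulation, with
# hypothetical names and toy data.
import numpy as np

def modified_caution_index(responses: np.ndarray) -> np.ndarray:
    """responses: students x items matrix of 0/1 scores (1 = correct)."""
    item_totals = responses.sum(axis=0).astype(float)  # students answering each item correctly
    totals_sorted = np.sort(item_totals)[::-1]         # item totals from easiest item to hardest
    mci = np.zeros(responses.shape[0])
    for i, row in enumerate(responses):
        n_i = int(row.sum())                           # the student's number-right score
        if n_i == 0 or n_i == responses.shape[1]:
            continue                                   # all-wrong / all-right patterns carry no misfit information
        guttman = totals_sorted[:n_i].sum()            # item totals if the n_i easiest items were answered correctly
        reversed_ = totals_sorted[-n_i:].sum()         # item totals if the n_i hardest items were answered correctly
        observed = (row * item_totals).sum()           # item totals for the items actually answered correctly
        denom = guttman - reversed_
        mci[i] = 0.0 if denom == 0 else (guttman - observed) / denom
    return mci

# Toy data: 4 students, 5 items. Values near 0 indicate Guttman-like (expected)
# response patterns; values near 1 flag aberrant patterns worth diagnostic attention.
X = np.array([[1, 1, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 1, 1],
              [1, 1, 1, 1, 0]])
print(modified_caution_index(X))  # the third pattern (hard items right, easy items wrong) scores 1.0
```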

Lord, Frederic M. – Journal of Educational Measurement, 1984
Four methods are outlined for estimating or approximating, from a single test administration, the standard error of measurement of the number-right test score at specified ability levels or cutting scores. The methods are illustrated and compared on one set of real test data. (Author)
Descriptors: Academic Ability, Cutting Scores, Error of Measurement, Scoring Formulas
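The entry does not spell out the four methods Lord compares, so the following is only one concrete illustration: the binomial-error-model approximation often associated with Lord, which gives the conditional standard error of measurement of a number-right score x on an n-item test as sqrt(x(n - x)/(n - 1)). The test length and cutting scores below are hypothetical.

```python
# Minimal sketch of one conditional-SEM approximation (binomial error model),
# shown only as an illustration; not a reproduction of Lord's comparison.
import math

def binomial_error_sem(x: int, n: int) -> float:
    """Approximate SEM of a number-right score x on an n-item test: sqrt(x(n - x)/(n - 1))."""
    return math.sqrt(x * (n - x) / (n - 1))

# Hypothetical 50-item test evaluated at a few possible cutting scores.
for cut in (25, 35, 45):
    print(f"cut score {cut}: conditional SEM ~ {binomial_error_sem(cut, 50):.2f}")
```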

Frisbie, David A. – Journal of Educational Measurement, 1992
This guide for school administrators is written to promote careful and wise use of scores from standardized achievement tests. Authors of two sections particularly criticized in the review respond about what should be included in a primer on testing and interpreting test scores for compensatory education students. (SLD)
Descriptors: Achievement Tests, Administrator Role, Compensatory Education, Educational Assessment