Showing all 10 results
Peer reviewed
Newton, Paul E. – Journal of Educational Measurement, 2013
Kane distinguishes between two kinds of argument: the interpretation/use argument and the validity argument. This commentary considers whether there really are two kinds of argument, two arguments, or just one. It concludes that there is just one argument: the validity argument. (Contains 2 figures and 5 notes.)
Descriptors: Validity, Test Interpretation, Test Use
Peer reviewed
Sireci, Stephen G. – Journal of Educational Measurement, 2013
Kane (this issue) presents a comprehensive review of validity theory and reminds us that the focus of validation is on test score interpretations and use. In reacting to his article, I support the argument-based approach to validity and all of the major points regarding validation made by Dr. Kane. In addition, I call for a simpler, three-step…
Descriptors: Validity, Theories, Test Interpretation, Test Use
Peer reviewed
Borsboom, Denny; Markus, Keith A. – Journal of Educational Measurement, 2013
According to Kane (this issue), "the validity of a proposed interpretation or use depends on how well the evidence supports" the claims being made. Because truth and evidence are distinct, this means that the validity of a test score interpretation could be high even though the interpretation is false. As an illustration, we discuss the case of…
Descriptors: Evidence, Ethics, Validity, Theories
Peer reviewed
Brennan, Robert L. – Journal of Educational Measurement, 2013
Kane's paper "Validating the Interpretations and Uses of Test Scores" is the most complete and clearest discussion yet available of the argument-based approach to validation. At its most basic level, validation as formulated by Kane is fundamentally a simply-stated two-step enterprise: (1) specify the claims inherent in a particular interpretation…
Descriptors: Validity, Test Interpretation, Test Use, Scores
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 2013
This response to the comments contains three main sections, each addressing a subset of the comments. In the first section, I will respond to the comments by Brennan, Haertel, and Moss. All of these comments suggest ways in which my presentation could be extended or improved; I generally agree with their suggestions, so my response to their…
Descriptors: Validity, Test Interpretation, Test Use, Scores
Peer reviewed
Moss, Pamela A. – Journal of Educational Measurement, 2013
Studies of data use illuminate ways in which education professionals have used test scores and other evidence relevant to students' learning--in action in their own contexts of work--to make decisions about their practice. These studies raise instructive challenges for a validity theory that focuses on intended interpretations and uses of test…
Descriptors: Validity, Test Use, Test Interpretation, Scores
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 2013
To validate an interpretation or use of test scores is to evaluate the plausibility of the claims based on the scores. An argument-based approach to validation suggests that the claims based on the test scores be outlined as an argument that specifies the inferences and supporting assumptions needed to get from test responses to score-based…
Descriptors: Test Interpretation, Validity, Scores, Test Use
Peer reviewed
Harnisch, Delwyn L. – Journal of Educational Measurement, 1983
The Student-Problem (S-P) methodology is described using an example of 24 students on a 44-item test. Information based on each student's test score and modified caution index is put to diagnostic use. A modification of the S-P methodology is applied to domain-referenced testing. (Author/CM)
Descriptors: Academic Achievement, Educational Practices, Item Analysis, Responses
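The abstract above refers to the modified caution index used in S-P analysis. As a rough illustration (not taken from the article itself), the Python sketch below computes a per-student modified caution index, assuming the Harnisch and Linn (1981) formulation of the index; the function name and the toy data are illustrative only.

import numpy as np

def modified_caution_index(responses):
    # Per-student modified caution index (MCI), assuming the Harnisch & Linn
    # (1981) formulation: 0 for a perfect Guttman pattern (the student answers
    # exactly the easiest items correctly), 1 for a fully reversed pattern.
    # `responses` is a students x items matrix of 0/1 scores.
    responses = np.asarray(responses)
    n_students, n_items = responses.shape
    p = responses.mean(axis=0)                 # item easiness (proportion correct)
    order = np.argsort(-p)                     # easiest item first
    p_sorted = p[order]
    u_sorted = responses[:, order]
    cum_easy = np.concatenate(([0.0], np.cumsum(p_sorted)))        # easiest-first sums
    cum_hard = np.concatenate(([0.0], np.cumsum(p_sorted[::-1])))  # hardest-first sums

    mci = np.zeros(n_students)
    for i in range(n_students):
        n_i = int(u_sorted[i].sum())
        if n_i == 0 or n_i == n_items:
            continue                           # index undefined for all-wrong/all-right
        observed = float((u_sorted[i] * p_sorted).sum())
        denom = cum_easy[n_i] - cum_hard[n_i]
        mci[i] = (cum_easy[n_i] - observed) / denom if denom > 0 else 0.0
    return mci

# Toy data in the spirit of the abstract's 24-student, 44-item example
rng = np.random.default_rng(0)
toy = (rng.random((24, 44)) < 0.6).astype(int)
print(modified_caution_index(toy).round(2))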
Peer reviewed
Lord, Frederic M. – Journal of Educational Measurement, 1984
Four methods are outlined for estimating or approximating, from a single test administration, the standard error of measurement of the number-right test score at specified ability levels or cutting scores. The methods are illustrated and compared on one set of real test data. (Author)
Descriptors: Academic Ability, Cutting Scores, Error of Measurement, Scoring Formulas
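One single-administration approach in this line of work rests on a simple binomial error model, which gives the conditional standard error of measurement at a raw score x on an n-item test as sqrt(x(n - x)/(n - 1)). The Python sketch below illustrates that approximation only; the test length and score values are made up, and the article's other methods involve stronger assumptions not shown here.

import math

def binomial_conditional_sem(x: int, n_items: int) -> float:
    # Conditional SEM of a number-right score x on an n-item test under a
    # simple binomial error model: sqrt(x * (n - x) / (n - 1)).
    # A rough single-administration approximation, not a reconstruction of
    # all four methods the article outlines.
    return math.sqrt(x * (n_items - x) / (n_items - 1))

# Illustrative values for a hypothetical 50-item test
for score in (10, 25, 40, 48):
    print(f"score {score:2d}: SEM ~ {binomial_conditional_sem(score, 50):.2f}")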
Peer reviewed
Frisbie, David A. – Journal of Educational Measurement, 1992
This guide for school administrators is written to promote the careful and wise use of scores from standardized achievement tests. The authors of the two sections particularly criticized in the review respond concerning what should be included in a primer on testing and interpreting test scores for compensatory education students. (SLD)
Descriptors: Achievement Tests, Administrator Role, Compensatory Education, Educational Assessment