Showing 1 to 15 of 25 results
Peer reviewed
Jessica B. Koslouski; Sandra M. Chafouleas; Amy Briesch; Jacqueline M. Caemmerer; Brittany Melo – School Mental Health, 2024
We are developing the Equitable Screening to Support Youth (ESSY) Whole Child Screener to address concerns, prevalent in existing school-based screenings, that impede goals of advancing educational equity through universal screening. Traditional assessment development does not include end users in the early development phases, instead relying on a…
Descriptors: Screening Tests, Psychometrics, Validity, Child Development
Peer reviewed
Full text available on ERIC (PDF)
Della-Piana, Gabriel M.; Gardner, Michael K.; Mayne, Zachary M. – Journal of Research Practice, 2018
The authors describe challenges of following professional standards for educational achievement testing due to the complexity of gathering appropriate evidence to support demanding test interpretation and use. Validity evidence has been found to be low for some individual testing standards, leading to the possibility of faulty or impoverished test…
Descriptors: Achievement Tests, Standards, Educational Assessment, Testing
Peer reviewed
Jin, Hui; van Rijn, Peter; Moore, John C.; Bauer, Malcolm I.; Pressler, Yamina; Yestness, Nissa – International Journal of Science Education, 2019
This article provides a validation framework for research on the development and use of science Learning Progressions (LPs). The framework describes how evidence from various sources can be used to establish an interpretive argument and a validity argument at five stages of LP research--development, scoring, generalisation, extrapolation, and use.…
Descriptors: Sequential Approach, Educational Research, Science Education, Validity
Jin, Hui; van Rijn, Peter; Moore, John C.; Bauer, Malcolm I.; Pressler, Yamina; Yestness, Nissa – Grantee Submission, 2019
This article provides a validation framework for research on the development and use of science Learning Progressions (LPs). The framework describes how evidence from various sources can be used to establish an interpretive argument and a validity argument at five stages of LP research--development, scoring, generalisation, extrapolation, and use.…
Descriptors: Sequential Approach, Educational Research, Science Education, Validity
Peer reviewed
Ketterlin-Geller, Leanne R.; Perry, Lindsey; Adams, Elizabeth – Applied Measurement in Education, 2019
Despite the call for an argument-based approach to validity over 25 years ago, few examples exist in the published literature. One possible explanation for this outcome is that the complexity of the argument-based approach makes implementation difficult. To counter this claim, we propose that the Assessment Triangle can serve as the overarching…
Descriptors: Validity, Educational Assessment, Models, Screening Tests
Peer reviewed
Jessica B. Koslouski; Sandra M. Chafouleas; Amy Briesch; Jacqueline M. Caemmerer; Brittany Melo – Grantee Submission, 2024
We are developing the Equitable Screening to Support Youth (ESSY) Whole Child Screener to address concerns, prevalent in existing school-based screenings, that impede goals of advancing educational equity through universal screening. Traditional assessment development does not include end users in the early development phases, instead relying on a…
Descriptors: Screening Tests, Usability, Decision Making, Validity
Peer reviewed
Newton, Paul E. – Journal of Educational Measurement, 2013
Kane distinguishes between two kinds of argument: the interpretation/use argument and the validity argument. This commentary considers whether there really are two kinds of argument, two arguments, or just one. It concludes that there is just one argument: the validity argument. (Contains 2 figures and 5 notes.)
Descriptors: Validity, Test Interpretation, Test Use
Peer reviewed
Sireci, Stephen G. – Journal of Educational Measurement, 2013
Kane (this issue) presents a comprehensive review of validity theory and reminds us that the focus of validation is on test score interpretations and use. In reacting to his article, I support the argument-based approach to validity and all of the major points regarding validation made by Dr. Kane. In addition, I call for a simpler, three-step…
Descriptors: Validity, Theories, Test Interpretation, Test Use
Peer reviewed
Borsboom, Denny; Markus, Keith A. – Journal of Educational Measurement, 2013
According to Kane (this issue), "the validity of a proposed interpretation or use depends on how well the evidence supports" the claims being made. Because truth and evidence are distinct, this means that the validity of a test score interpretation could be high even though the interpretation is false. As an illustration, we discuss the case of…
Descriptors: Evidence, Ethics, Validity, Theories
Peer reviewed
Brennan, Robert L. – Journal of Educational Measurement, 2013
Kane's paper "Validating the Interpretations and Uses of Test Scores" is the most complete and clearest discussion yet available of the argument-based approach to validation. At its most basic level, validation as formulated by Kane is fundamentally a simply-stated two-step enterprise: (1) specify the claims inherent in a particular interpretation…
Descriptors: Validity, Test Interpretation, Test Use, Scores
Peer reviewed
Chapelle, Carol A. – Language Testing, 2012
According to Kane (2006), the argument-based framework is quite simple and involves two steps. First, specify the proposed interpretations and uses of the scores in some detail. Second, evaluate the overall plausibility of the proposed interpretations and uses. Based on experience gained in developing that validity argument, Chapelle, Enright, and…
Descriptors: Validity, Language Tests, Test Interpretation, Test Use
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 2013
This response to the comments contains three main sections, each addressing a subset of the comments. In the first section, I will respond to the comments by Brennan, Haertel, and Moss. All of these comments suggest ways in which my presentation could be extended or improved; I generally agree with their suggestions, so my response to their…
Descriptors: Validity, Test Interpretation, Test Use, Scores
Peer reviewed
Moss, Pamela A. – Journal of Educational Measurement, 2013
Studies of data use illuminate ways in which education professionals have used test scores and other evidence relevant to students' learning--in action in their own contexts of work--to make decisions about their practice. These studies raise instructive challenges for a validity theory that focuses on intended interpretations and uses of test…
Descriptors: Validity, Test Use, Test Interpretation, Scores
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 2013
To validate an interpretation or use of test scores is to evaluate the plausibility of the claims based on the scores. An argument-based approach to validation suggests that the claims based on the test scores be outlined as an argument that specifies the inferences and supporting assumptions needed to get from test responses to score-based…
Descriptors: Test Interpretation, Validity, Scores, Test Use
Peer reviewed
Jin, Tan; Mak, Barley; Zhou, Pei – Language Testing, 2012
The fuzziness of assessing second language speaking performance raises two difficulties in scoring: "indistinction between adjacent levels" and "overlap between scales". To address these two problems, this article proposes a new approach, "confidence scoring", to deal with such fuzziness, leading to "confidence" scores between…
Descriptors: Speech Communication, Scoring, Test Interpretation, Second Language Learning