Showing 1 to 15 of 96 results
Anne H. Davidson – National Assessment Governing Board, 2025
The purpose of this National Assessment of Educational Progress (NAEP) Achievement Levels Validity Argument Report is to synthesize the evidence currently available to address the validity of the interpretations and uses of the NAEP Achievement Levels. Validity is the extent to which theory and evidence support or refute proposed and enacted test…
Descriptors: National Competency Tests, Academic Achievement, Test Validity, College Entrance Examinations
Areekkuzhiyil, Santhosh – Online Submission, 2021
Assessment is an integral part of any teaching and learning process. Assessment has a large number of functions to perform, whether formative or summative. This paper analyses the issues involved and the areas of concern in classroom assessment practice and discusses the recent reforms taking place. [This paper was published in Edutracks v20 n8…
Descriptors: Student Evaluation, Formative Evaluation, Summative Evaluation, Test Validity
Peer reviewed
O'Leary, Timothy M.; Hattie, John A. C.; Griffin, Patrick – Educational Measurement: Issues and Practice, 2017
Validity is the most fundamental consideration in test development. Understandably, much time, effort, and money are spent in its pursuit. Central to the modern conception of validity are the interpretations made, and uses planned, on the basis of test scores. There is, unfortunately, evidence that test users have difficulty understanding…
Descriptors: Test Interpretation, Scores, Test Validity, Evidence
Peer reviewed
Jacobson, Erik; Svetina, Dubravka – Applied Measurement in Education, 2019
Contingent argument-based approaches to validity require a unique argument for each use, in contrast to more prescriptive approaches that identify the common kinds of validity evidence researchers should consider for every use. In this article, we evaluate our use of an approach that is both prescriptive "and" argument-based to develop a…
Descriptors: Test Validity, Test Items, Test Construction, Test Interpretation
Peer reviewed
Full text available on ERIC
Patrick Kyllonen; Amit Sevak; Teresa Ober; Ikkyu Choi; Jesse Sparks; Daniel Fishtein – ETS Research Report Series, 2024
Assessment refers to a broad array of approaches for measuring or evaluating a person's (or group of persons') skills, behaviors, dispositions, or other attributes. Assessments range from standardized tests used in admissions, employee selection, licensure examinations, and domestic and international large-scale assessments of cognitive and…
Descriptors: Assessment Literacy, Testing, Test Bias, Test Construction
Peer reviewed
Geisinger, Kurt F. – Assessment in Education: Principles, Policy & Practice, 2016
The six primary papers in this issue of "Assessment in Education" emphasise a single primary point: the concept of validity is a complex one. Essentially, validity is a collective noun. That is, just as a group of players may be called a team and a group of geese a flock, so too does validity represent a variety of processes and…
Descriptors: Test Validity, Definitions, Standards, Test Interpretation
Peer reviewed
Barkaoui, Khaled – Language Assessment Quarterly, 2017
As the number of candidates who repeat English language proficiency tests more than once to meet a certain cutscore (e.g., for university admission) or to demonstrate progress (e.g., after instruction) continues to increase dramatically, there is a need for more research on the attributes and test performance of test repeaters. This article…
Descriptors: Language Tests, Second Languages, Language Proficiency, Repetition
Peer reviewed
Kane, Michael T. – Assessment in Education: Principles, Policy & Practice, 2016
How we choose to use a term depends on what we want to do with it. If "validity" is to be used to support a score interpretation, validation would require an analysis of the plausibility of that interpretation. If validity is to be used to support score uses, validation would require an analysis of the appropriateness of the proposed…
Descriptors: Test Validity, Test Interpretation, Test Use, Scores
Peer reviewed
Gafni, Naomi – Assessment in Education: Principles, Policy & Practice, 2016
Naomi Gafni, director of Research and Development, National Institute for Testing and Evaluation, Jerusalem, Israel, has devoted a substantial part of her career to the development of admissions tests and other educational tests and to the investigation of their validity. As such she is keenly aware of the complexities involved in this process.…
Descriptors: Test Validity, Test Interpretation, Test Use, Test Construction
Peer reviewed
Karren, Benjamin C. – Journal of Psychoeducational Assessment, 2017
The Gilliam Autism Rating Scale-Third Edition (GARS-3) is a norm-referenced tool designed to screen for autism spectrum disorders (ASD) in individuals between the ages of 3 and 22 (Gilliam, 2014). The GARS-3 test kit consists of three different components and includes an "Examiner's Manual," summary/response forms (50), and the…
Descriptors: Autism, Pervasive Developmental Disorders, Rating Scales, Norm Referenced Tests
Peer reviewed
Papageorgiou, Spiros; Tannenbaum, Richard J. – Language Assessment Quarterly, 2016
Although there has been substantial work on argument-based approaches to validation as well as on standard-setting methodologies, it might not always be clear how standard setting fits into argument-based validity. The purpose of this article is to address this gap in the literature, with a specific focus on topics related to argument-based…
Descriptors: Standard Setting (Scoring), Language Tests, Test Validity, Test Construction
Peer reviewed
Aloisi, Cesare; Callaghan, A. – Higher Education Pedagogies, 2018
The University of Reading Learning Gain project is a three-year longitudinal project to test and evaluate a range of available methodologies and to draw conclusions on what might be the right combination of instruments for the measurement of Learning Gain in higher education. This paper analyses the validity of a measure of critical thinking…
Descriptors: Foreign Countries, Cognitive Tests, Critical Thinking, Thinking Skills
Boyer, Michelle; Landl, Erika – National Center on Educational Outcomes, 2021
This Brief contains a scan of the interim assessment landscape, focused on the availability of documentation supporting the appropriateness of these assessments for students with disabilities. The purpose of this Brief is to inform the development of guidance that facilitates improved practices related to the use of interim assessments for…
Descriptors: Students with Disabilities, Student Evaluation, Formative Evaluation, Inclusion
Peer reviewed
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
Peer reviewed
Reynolds, Matthew R.; Niileksela, Christopher R. – Journal of Psychoeducational Assessment, 2015
"The Woodcock-Johnson IV Tests of Cognitive Abilities" (WJ IV COG) is an individually administered measure of psychometric intellectual abilities designed for ages 2 to 90+. The measure was published by Houghton Mifflin Harcourt-Riverside in 2014. Frederick Shrank, Kevin McGrew, and Nancy Mather are the authors. Richard Woodcock, the…
Descriptors: Cognitive Tests, Testing, Scoring, Test Interpretation