Behizadeh, Nadia; Engelhard, George, Jr. – Measurement: Interdisciplinary Research and Perspectives, 2015
In his focus article, Koretz (this issue) argues that accountability has become the primary function of large-scale testing in the United States. He then points out that tests being used for accountability purposes are flawed and that the high-stakes nature of these tests creates a context that encourages score inflation. Koretz is concerned about…
Descriptors: Communities of Practice, High Stakes Tests, Testing, Test Validity
Hill, Kathryn; McNamara, Tim – Measurement: Interdisciplinary Research and Perspectives, 2015
Those who work in second- and foreign-language testing often find Koretz's concern for validity inferences under high-stakes (VIHS) conditions both welcome and familiar. While the focus of the article is more narrowly on the potential for two instructional responses to test-based accountability, "reallocation" and "coaching,"…
Descriptors: Language Tests, Test Validity, High Stakes Tests, Inferences
Haertel, Edward – Measurement: Interdisciplinary Research and Perspectives, 2013
Validation research for educational achievement tests is often limited to an examination of intended test score interpretations. This article calls for an expansion of validation research in three dimensions. First, validation must attend to actual test use and its consequences, not just score meaning. Second, validation must attend to unintended…
Descriptors: Educational Testing, Educational Improvement, Test Validity, Achievement Tests
Alonzo, Alicia C. – Measurement: Interdisciplinary Research and Perspectives, 2007
Schilling et al. (this issue) have done a commendable job in illustrating a comprehensive process of validating assessments of teacher knowledge (and, more broadly, other types of tests as well). On one hand, the concrete illustration of a process that often remains murky and incomplete is profoundly heartening, as it provides a rigorous model for…
Descriptors: Mathematics Education, Teacher Characteristics, Mathematics Instruction, Knowledge Base for Teaching
Gearhart, Maryl – Measurement: Interdisciplinary Research and Perspectives, 2007
Teacher knowledge has been of theoretical and empirical interest for over two decades, and development of measures is overdue. The researchers represented in this volume have been breaking new ground by developing a measure of mathematical knowledge for teaching (MKT) without guiding precedents, and in the face of differing perspectives on teacher…
Descriptors: Learning Theories, Elementary School Mathematics, Teaching Methods, Construct Validity
Kulikowich, Jonna M. – Measurement: Interdisciplinary Research and Perspectives, 2007
Operating from multiple literature bases in cognitive psychology, mathematics education, and theoretical and applied psychometrics, Schilling, Hill and their colleagues provide a systemic approach to studying the validity of scores of mathematical knowledge for teaching. This system encompasses an array of task formats and methodologies. The…
Descriptors: Multiple Choice Tests, Learning Theories, Teaching Methods, Construct Validity