Kettler, Ryan J. – School Psychology International, 2020
This article is a commentary on McGill et al.'s (2020) article "Use of Translated and Adapted Versions of the WISC-V: Caveat Emptor." McGill et al. use caveat emptor in their title to indicate that the buyer of an assessment must be careful about the product being purchased, presumably because the seller of the assessment is not being…
Descriptors: Children, Intelligence Tests, Translation, Test Reliability
Koretz, Daniel – Assessment in Education: Principles, Policy & Practice, 2016
Daniel Koretz is the Henry Lee Shattuck Professor of Education at the Harvard Graduate School of Education. His research focuses on educational assessment and policy, particularly the effects of high-stakes testing on educational practice and the validity of score gains. He is the author of "Measuring Up: What Educational Testing Really Tells…
Descriptors: Test Validity, Definitions, Evidence, Relevance (Education)
Zumbo, Bruno D.; Hubley, Anita M. – Assessment in Education: Principles, Policy & Practice, 2016
Ultimately, measures in research, testing, assessment and evaluation are used, or have implications, for ranking, intervention, feedback, decision-making or policy purposes. Explicit recognition of this fact brings the often-ignored and sometimes maligned concept of consequences to the fore. Given that measures have personal and social…
Descriptors: Testing Programs, Testing Problems, Measurement Techniques, Student Evaluation
Brown, James Dean; Salmani Nodoushan, Mohammad Ali – Online Submission, 2015
In this interview, JD Brown reflects on language testing/assessment. He suggests that language testing can be seen as a continuum with hard-core positivist approaches at one end and postmodernist interpretive perspectives at the other, and also argues that norm referencing (be it proficiency, placement, or aptitude testing) and criterion…
Descriptors: Interviews, Language Tests, English (Second Language), Second Language Learning
Hill, Kathryn; McNamara, Tim – Measurement: Interdisciplinary Research and Perspectives, 2015
Those who work in second- and foreign-language testing often find Koretz's concern for validity inferences under high-stakes (VIHS) conditions both welcome and familiar. While the focus of the article is more narrowly on the potential for two instructional responses to test-based accountability, "reallocation" and "coaching,"…
Descriptors: Language Tests, Test Validity, High Stakes Tests, Inferences
Baird, Jo-Anne – Measurement: Interdisciplinary Research and Perspectives, 2010
Newton's article (2010) makes three main contributions to the literature. First, it is transatlantic, bringing together literatures that have been dealing with similar problems, using sometimes different methods and certainly with distinctive educational, cultural perspectives. He points out that neither of these literatures has all of the…
Descriptors: Foreign Countries, Predictive Validity, Standards, Ethics
Walker, Michael E. – Measurement: Interdisciplinary Research and Perspectives, 2010
"Linking" is a term given to a general class of procedures by which one represents scores X on one test or measure in terms of scores Y on another test or measure. A recent taxonomy by Holland and Dorans (2006; Holland, 2007) organizes the various types of links into three broad categories: prediction, scale aligning, and equating. In…
Descriptors: Foreign Countries, Test Construction, Test Validity, Measurement Techniques
Alonzo, Alicia C. – Measurement: Interdisciplinary Research and Perspectives, 2007
Schilling et al. (this issue) have done a commendable job in illustrating a comprehensive process of validating assessments of teacher knowledge (and, more broadly, other types of tests as well). On one hand, the concrete illustration of a process that often remains murky and incomplete is profoundly heartening, as it provides a rigorous model for…
Descriptors: Mathematics Education, Teacher Characteristics, Mathematics Instruction, Knowledge Base for Teaching
von Davier, Alina A. – Measurement: Interdisciplinary Research and Perspectives, 2010
The article "Thinking About Linking" by Newton (2010) presents a novel philosophical perspective on the way that educational assessments should be linked. Newton starts by describing the linking framework as it was characterized in various publications and identifies a cross-cultural dimension in the definitions and uses of test…
Descriptors: Foreign Countries, Educational Assessment, Student Evaluation, Evaluation Criteria
Madaus, George F. – Educational Measurement: Issues and Practice, 1986
This reply to William A. Mehrens argues that test validity is the central issue in discussing the appropriate role of tests. It states that the procedures used to establish the validity of tests are inadequate because they depend primarily on content validity and not on construct and criterion validity. (JAZ)
Descriptors: Concurrent Validity, Construct Validity, Cutting Scores, Decision Making
Schoenfeld, Alan H. – Measurement: Interdisciplinary Research and Perspectives, 2007
The authors of this volume's stimulus papers have taken on the challenge of developing measures of teachers' mathematical knowledge for teaching (MKT). This task involves multiple decisions and considerations, including: (1) How does one specify the body of knowledge being assessed? What warrants are offered for those choices?; (2) How does one…
Descriptors: Test Validity, Psychometrics, Test Construction, Evaluation Research
Gearhart, Maryl – Measurement: Interdisciplinary Research and Perspectives, 2007
Teacher knowledge has been of theoretical and empirical interest for over two decades, and development of measures is overdue. The researchers represented in this volume have been breaking new ground by developing a measure of mathematical knowledge for teaching (MKT) without guiding precedents, and in the face of differing perspectives on teacher…
Descriptors: Learning Theories, Elementary School Mathematics, Teaching Methods, Construct Validity
Kulikowich, Jonna M. – Measurement: Interdisciplinary Research and Perspectives, 2007
Operating from multiple literature bases in cognitive psychology, mathematics education, and theoretical and applied psychometrics, Schilling, Hill and their colleagues provide a systemic approach to studying the validity of scores of mathematical knowledge for teaching. This system encompasses an array of task formats and methodologies. The…
Descriptors: Multiple Choice Tests, Learning Theories, Teaching Methods, Construct Validity
Hill, Heather C. – Measurement: Interdisciplinary Research and Perspectives, 2007
The author offers some thoughts on commentator's reactions to the substance of the measures, particularly those about measuring teacher learning and change, based on the major uses of the measures, and because this is a significant challenge facing test development as an enterprise. If teacher learning results in more integrated knowledge or…
Descriptors: Educational Testing, Tests, Measurement, Faculty Development
Schilling, Stephen – Measurement: Interdisciplinary Research and Perspectives, 2007
In this article, the author echoes his co-author's and colleague's pleasure (Hill, this issue) at the thoughtfulness and far-ranging nature of the comments to their initial attempts at test validation for the mathematical knowledge for teaching (MKT) measures using the validity argument approach. Because of the large number of commentaries they…
Descriptors: Generalizability Theory, Persuasive Discourse, Educational Testing, Measurement