Borsboom, Denny; Wijsen, Lisa D. – Assessment in Education: Principles, Policy & Practice, 2017
The central role of educational testing practices in contemporary societies can hardly be overstated. It is furthermore evident that psychometric models regulate, justify, and legitimize the processes through which educational testing practices are used. In this commentary, the authors offer some observations that may be relevant for the analyses…
Descriptors: Educational Assessment, Learning, Psychometrics, Power Structure
Ackerman, Terry – Journal of Educational and Behavioral Statistics, 2016
In this commentary, Terry Ackerman, associate dean of research and assessment at the University of North Carolina's School of Education, poses questions and shares his thoughts on David Thissen's essay, "Bad Questions: An Essay Involving Item Response Theory" (this issue). Ackerman begins by considering the two purposes of Item Response…
Descriptors: Item Response Theory, Test Items, Selection, Scores
Oberski, Daniel L.; Vermunt, Jeroen K. – Measurement: Interdisciplinary Research and Perspectives, 2013
These authors congratulate Albert Maydeu-Olivares on his lucid and timely overview of goodness-of-fit assessment in IRT models, a field to which he himself has contributed considerably in the form of limited information statistics. In this commentary, Oberski and Vermunt focus on two aspects of model fit: (1) what causes there may be of misfit;…
Descriptors: Goodness of Fit, Item Response Theory, Models, Test Bias
Sinharay, Sandip; Dorans, Neil J. – Journal of Educational and Behavioral Statistics, 2010
The Mantel-Haenszel (MH) procedure (Mantel and Haenszel) is a popular method for estimating and testing a common two-factor association parameter in a 2 x 2 x K table. Holland and Holland and Thayer described how to use the procedure to detect differential item functioning (DIF) for tests with dichotomously scored items. Wang, Bradlow, Wainer, and…
Descriptors: Test Bias, Statistical Analysis, Computation, Bayesian Statistics
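The Mantel-Haenszel approach described in the abstract above pools, across K total-score strata, the odds of a correct response for reference versus focal examinees. A minimal sketch of the common odds ratio and the ETS delta-scale DIF statistic, with hypothetical counts (the function name and data are illustrative, not drawn from the cited studies):

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio for a 2 x 2 x K table.

    strata: list of (ref_right, ref_wrong, foc_right, foc_wrong)
    tuples, one per matching (total-score) stratum.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n  # reference-right x focal-wrong
        den += b * c / n  # reference-wrong x focal-right
    return num / den

# Hypothetical counts at K = 3 score levels:
strata = [(40, 10, 30, 20), (60, 20, 50, 30), (80, 5, 70, 15)]
alpha = mh_odds_ratio(strata)          # > 1 favors the reference group
mh_d_dif = -2.35 * math.log(alpha)     # ETS delta-scale DIF statistic
```

A significance test for the pooled odds ratio (the MH chi-square) would normally accompany this estimate; `statsmodels.stats.contingency_tables.StratifiedTable` provides both.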
Wainer, Howard; Bradlow, Eric; Wang, Xiaohui – Journal of Educational and Behavioral Statistics, 2010
Confucius pointed out that the first step toward wisdom is calling things by the right name. The term "Differential Item Functioning" (DIF) did not arise fully formed from the miasma of psychometrics; it evolved from a variety of less accurate terms. Among its forebears was "item bias," but that term has a pejorative connotation…
Descriptors: Test Bias, Difficulty Level, Test Items, Statistical Analysis
Mislevy, Robert J. – Educational Measurement: Issues and Practice, 2012
This article presents the author's observations on Neil Dorans's NCME Career Award Address: "The Contestant Perspective on Taking Tests: Emanations from the Statue within." He calls attention to some points that Dr. Dorans made in his address, and offers his thoughts in response.
Descriptors: Testing, Test Reliability, Psychometrics, Scores
Dorans, Neil J. – Harvard Educational Review, 2010
In his 2003 article in the "Harvard Educational Review" (HER), Freedle claimed that the SAT was both culturally and statistically biased and proposed a solution to ameliorate this bias. The author argued (Dorans, 2004a) that these claims were based on serious computational errors. In particular, he focused on how Freedle's table 2 was…
Descriptors: College Entrance Examinations, Test Bias, Test Items, Difficulty Level
Santelices, Maria Veronica; Wilson, Mark – Harvard Educational Review, 2010
In their paper "Unfair Treatment? The Case of Freedle, the SAT, and the Standardization Approach to Differential Item Functioning" (Santelices & Wilson, 2010), the authors studied claims of differential effects of the SAT on Latinos and African Americans through the methodology of differential item functioning (DIF). Previous…
Descriptors: College Entrance Examinations, Test Bias, Test Items, Difficulty Level
Pommerich, Mary – Educational Measurement: Issues and Practice, 2012
Neil Dorans has made a career of advocating for the examinee. He continues to do so in his NCME career award address, providing a thought-provoking commentary on some current trends in educational measurement that could potentially affect the integrity of test scores. Concerns expressed in the address call attention to a conundrum that faces…
Descriptors: Testing, Scores, Measurement, Test Construction
Kane, Michael – Language Testing, 2010
This paper presents the author's critique on Xiaoming Xi's article, "How do we go about investigating test fairness?," which lays out a broad framework for studying fairness as comparable validity across groups within the population of interest. Xi proposes to develop a fairness argument that would identify and evaluate potential fairness-based…
Descriptors: Test Bias, Test Validity, Language Tests, Testing
Dorans, Neil J. – Educational Testing Service, 2010
Santelices and Wilson (2010) claimed to have addressed technical criticisms of Freedle (2003) presented in Dorans (2004a) and elsewhere. Santelices and Wilson's abstract claimed that their study confirmed that SAT® verbal items do function differently for African American and White subgroups. In this commentary, I demonstrate that the…
Descriptors: College Entrance Examinations, Verbal Tests, Test Bias, Test Items
College Board, 2010
This is the College Board's response to a research article by Drs. Maria Veronica Santelices and Mark Wilson in the Harvard Educational Review, entitled "Unfair Treatment? The Case of Freedle, the SAT, and the Standardization Approach to Differential Item Functioning" (see EJ930622).
Descriptors: Test Bias, College Entrance Examinations, Standardized Tests, Test Items
Kunnan, Antony John – Language Testing, 2010
This paper presents the author's response to Xiaoming Xi's article titled "How do we go about investigating test fairness?" In this response, the author focuses on test fairness and Toulmin's model of argument structure, Xi's proposal, and the challenges the proposal brings. Xi proposes an approach to investigating test fairness to guide…
Descriptors: Persuasive Discourse, Inferences, Test Bias, Models
Freedle, Roy O. – Harvard Educational Review, 2010
In this commentary, the author discusses two recent replications (Santelices & Wilson, 2010; Scherbaum & Goldstein, 2008) of some of his earlier work on SAT items using the differential item functioning (DIF) statistic wherein he contrasted the test performance of African American examinees with White examinees (Freedle, 2003). In this…
Descriptors: College Entrance Examinations, Test Bias, Test Items, Difficulty Level
Helms, Janet E. – American Psychologist, 2009
In defending tests of cognitive abilities, knowledge, or skills (CAKS) from the skepticism of their "family members, friends, and neighbors" and aiding psychologists forced to defend tests from "myth and hearsay" in their own skeptical social networks (p. 215), Sackett, Borneman, and Connelly focused on evaluating validity coefficients, racial or…
Descriptors: Test Validity, Cognitive Ability, Error of Measurement, Test Bias