Publication Date
| Date range | Records |
| In 2026 | 0 |
| Since 2025 | 38 |
| Since 2022 (last 5 years) | 225 |
| Since 2017 (last 10 years) | 570 |
| Since 2007 (last 20 years) | 1377 |
Audience
| Audience | Records |
| Researchers | 110 |
| Practitioners | 107 |
| Teachers | 46 |
| Administrators | 25 |
| Policymakers | 24 |
| Counselors | 12 |
| Parents | 7 |
| Students | 7 |
| Support Staff | 4 |
| Community | 2 |
Location
| Location | Records |
| California | 61 |
| Canada | 60 |
| United States | 57 |
| Turkey | 47 |
| Australia | 43 |
| Florida | 34 |
| Germany | 26 |
| Texas | 26 |
| China | 25 |
| Netherlands | 25 |
| Iran | 22 |
What Works Clearinghouse Rating
| Rating | Records |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Flaugher, Ronald L. – 1973
Complexities of test fairness are described in nontechnical language, and their implications for the selection procedures practiced in our society are discussed. Four clearly distinguishable models of fair selection are presented: the Cleary, or traditional, model; the Cole model; the Thorndike model; and the Darlington model. A distinction is…
Descriptors: Models, Selection, Test Bias, Testing Problems
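The Flaugher entry above names the Cleary (regression) model of fair selection. As a minimal sketch of that idea, the following snippet, using simulated data and statsmodels (variable names are hypothetical, not from the report), fits a common regression of the criterion on the test score and then checks whether group-specific intercept and slope terms add predictive power; under the Cleary model, a test is fair when they do not.

```python
# Hypothetical illustration of the Cleary (regression) model of fair selection.
# Group and variable names are made up for the example; not from the cited paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)                   # 0 = reference, 1 = focal group
test = rng.normal(50, 10, n)                    # selection test score
criterion = 0.6 * test + rng.normal(0, 8, n)    # criterion, same relation in both groups

# Common regression line (Cleary's unbiased case uses one line for everyone)
common = sm.OLS(criterion, sm.add_constant(test)).fit()

# Augmented model: allow a group-specific intercept and slope
X_full = sm.add_constant(np.column_stack([test, group, test * group]))
full = sm.OLS(criterion, X_full).fit()

# If the group and test*group coefficients are negligible, prediction is fair
# in the Cleary sense (no systematic over- or under-prediction for either group).
print(full.summary())
print(full.compare_f_test(common))   # F-test: do the group terms improve prediction?
```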
Norborg, James M. – Personnel Psychology, 1984 (peer reviewed)
Analyzes the simplified approach, recommended by Lawshe (1983) and Reynolds (1980, 1982), to applying the Cleary Model to personnel selection tests. The analysis indicated that the simplified method can produce misleading results. (LLL)
Descriptors: Personnel Selection, Test Bias, Test Results
Park, Hye-Sook; Pearson, P. David; Reckase, Mark D. – Reading Psychology: An International Quarterly, 2004
Differential item functioning (DIF) statistics were computed using items from the Peabody Individual Achievement Test (PIAT) Reading Comprehension subtest, separately for children of the same age (ages 7 through 12). The pattern of observed DIF items was determined by comparing each cohort across age groups. Differences related to race and…
Descriptors: Age, Sentences, Achievement Tests, Test Bias
Huang, Chiungjung – Educational and Psychological Measurement, 2009
This study examined the percentage of task-sampling variability in performance assessment via a meta-analysis. In total, 50 studies containing 130 independent data sets were analyzed. Overall results indicate that the percentage of variance for (a) differential difficulty of task was roughly 12% and (b) examinee's differential performance of the…
Descriptors: Test Bias, Research Design, Performance Based Assessment, Performance Tests
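Huang's abstract above reports the percentage of score variance attributable to tasks. A rough illustration of how such a percentage is obtained for a single crossed person x task data set, in the spirit of generalizability theory (simulated data, numpy only; this is not the meta-analytic procedure used in the study):

```python
# Sketch of estimating the share of score variance due to tasks in a
# person x task (p x t) design. The data matrix is simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_p, n_t = 100, 6                                   # persons, tasks
person_eff = rng.normal(0, 1.0, (n_p, 1))
task_eff = rng.normal(0, 0.6, (1, n_t))
scores = 5 + person_eff + task_eff + rng.normal(0, 0.8, (n_p, n_t))

grand = scores.mean()
p_means = scores.mean(axis=1, keepdims=True)
t_means = scores.mean(axis=0, keepdims=True)

# Mean squares for the fully crossed design with one observation per cell
ms_p = n_t * ((p_means - grand) ** 2).sum() / (n_p - 1)
ms_t = n_p * ((t_means - grand) ** 2).sum() / (n_t - 1)
ms_pt = ((scores - p_means - t_means + grand) ** 2).sum() / ((n_p - 1) * (n_t - 1))

var_p = max((ms_p - ms_pt) / n_t, 0)    # person (universe score) variance
var_t = max((ms_t - ms_pt) / n_p, 0)    # task difficulty variance
var_pt = ms_pt                          # person x task interaction + error
total = var_p + var_t + var_pt
print(f"task variance share: {var_t / total:.1%}")
```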
Herman, Geoffrey Lindsay – ProQuest LLC, 2011
Instructors in electrical and computer engineering and in computer science have developed innovative methods to teach digital logic circuits. These methods attempt to increase student learning, satisfaction, and retention. Although there are readily accessible and accepted means for measuring satisfaction and retention, there are no widely…
Descriptors: Grounded Theory, Delphi Technique, Concept Formation, Misconceptions
Kim, Seock-Ho; Cohen, Allan S.; Alagoz, Cigdem; Kim, Sukwoo – Journal of Educational Measurement, 2007
Data from a large-scale performance assessment (N = 105,731) were analyzed with five differential item functioning (DIF) detection methods for polytomous items to examine the congruence among the DIF detection methods. Two different versions of the item response theory (IRT) model-based likelihood ratio test, the logistic regression likelihood…
Descriptors: Performance Based Assessment, Performance Tests, Item Response Theory, Test Bias
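One of the methods named in the Kim et al. entry above is the logistic regression likelihood-ratio test for DIF. A simplified sketch for a single dichotomous item (the study treats polytomous items), using simulated data and statsmodels; the likelihood-ratio statistics compare nested models with and without group and group-by-ability terms:

```python
# Simplified logistic-regression likelihood-ratio DIF test for one binary item.
# Data are simulated; variable names are illustrative only.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, n)                        # 0 = reference, 1 = focal
theta = rng.normal(0, 1, n)                          # matching variable (e.g., total score)
p = 1 / (1 + np.exp(-(1.2 * theta - 0.5 * group)))   # item with uniform DIF against focal group
item = rng.binomial(1, p)

X0 = sm.add_constant(theta)                                           # matching only
X1 = sm.add_constant(np.column_stack([theta, group]))                 # + group (uniform DIF)
X2 = sm.add_constant(np.column_stack([theta, group, theta * group]))  # + interaction (nonuniform DIF)

ll0 = sm.Logit(item, X0).fit(disp=0).llf
ll1 = sm.Logit(item, X1).fit(disp=0).llf
ll2 = sm.Logit(item, X2).fit(disp=0).llf

lr_uniform = 2 * (ll1 - ll0)        # 1 df chi-square test for uniform DIF
lr_nonuniform = 2 * (ll2 - ll1)     # 1 df chi-square test for nonuniform DIF
print("uniform DIF:    LR =", round(lr_uniform, 2), "p =", stats.chi2.sf(lr_uniform, 1))
print("nonuniform DIF: LR =", round(lr_nonuniform, 2), "p =", stats.chi2.sf(lr_nonuniform, 1))
```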
Zumbo, Bruno D. – Language Assessment Quarterly, 2007
The purpose of this article is to reflect on the state of the theorizing and praxis of DIF in general: where it has been; where it is now; and where I think it is, and should be, going. Along the way, the major trends in the differential item functioning (DIF) literature are summarized and integrated, providing some organizing principles that allow…
Descriptors: Test Bias, Evaluation Research, Research Methodology, Regression (Statistics)
Fidalgo, Angel M.; Hashimoto, Kanako; Bartram, Dave; Muniz, Jose – Journal of Experimental Education, 2007
In this study, the authors assess several strategies created on the basis of the Mantel-Haenszel (MH) procedure for conducting differential item functioning (DIF) analysis with small samples. One of the analytical strategies is a loss function (LF) that uses empirical Bayes Mantel-Haenszel estimators, whereas the other strategies use the classical…
Descriptors: Bayesian Statistics, Test Bias, Statistical Analysis, Sample Size
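For the Mantel-Haenszel procedure referenced in the Fidalgo et al. entry, here is a bare-bones illustration of the MH common odds ratio for one item, stratifying on total score (simulated responses, plain numpy; the loss-function and empirical Bayes refinements from the article are not shown):

```python
# Minimal Mantel-Haenszel DIF statistic for one dichotomous item,
# stratified by total test score. Simulated data for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n, k = 1500, 20
group = rng.integers(0, 2, n)                       # 0 = reference, 1 = focal
ability = rng.normal(0, 1, n)
difficulty = np.linspace(-1, 1, k)
prob = 1 / (1 + np.exp(-(ability[:, None] - difficulty)))
responses = (rng.random((n, k)) < prob).astype(int)
item = responses[:, 0]                              # studied item
total = responses.sum(axis=1)                       # matching variable

# MH common odds ratio: sum A*D/N over strata divided by sum B*C/N over strata
num = den = 0.0
for s in np.unique(total):
    m = total == s
    a = np.sum(m & (group == 0) & (item == 1))      # reference correct
    b = np.sum(m & (group == 0) & (item == 0))      # reference incorrect
    c = np.sum(m & (group == 1) & (item == 1))      # focal correct
    d = np.sum(m & (group == 1) & (item == 0))      # focal incorrect
    nt = a + b + c + d
    if nt > 0:
        num += a * d / nt
        den += b * c / nt

alpha_mh = num / den                                # > 1 favors the reference group
delta_mh = -2.35 * np.log(alpha_mh)                 # ETS delta (D-DIF) scale
print(f"MH odds ratio = {alpha_mh:.2f}, MH D-DIF = {delta_mh:.2f}")
```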
Colom, Roberto; Abad, Francisco J. – Intelligence, 2007
Mackintosh and Bennett's study [Mackintosh, N. J., & Bennett, E. S. (2005). "What do Raven's Matrices measure? An analysis in terms of sex differences." Intelligence, 33, 663-674] shows that males outperform females on some APM items but not on others, implying that these items measure discriminable mental processes. The present…
Descriptors: Test Bias, Gender Differences, Cognitive Processes, Measures (Individuals)
Reed, Deborah Kay – ProQuest LLC, 2010
This measurement study examined the construct validity of the retell component of the Texas Middle School Fluency Assessment (Texas Education Agency, University of Houston, & The University of Texas System, 2008a) within a confirmatory factor analysis framework. The role of retell, provided after a one-minute oral reading fluency measure, was…
Descriptors: Reading Fluency, Construct Validity, Interrater Reliability, Identification
Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald – Assessment and Accountability Comprehensive Center, 2010
This report describes the purposes of benchmark assessments and provides recommendations for selecting and using them, addressing validity, alignment, reliability, fairness and bias, accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…
Descriptors: Multiple Choice Tests, Test Items, Benchmarking, Educational Assessment
Young, John W. – Educational Assessment, 2009
In this article, I specify a conceptual framework for test validity research on content assessments taken by English language learners (ELLs) in U.S. schools in grades K-12. This framework is modeled after one previously delineated by Willingham et al. (1988), which was developed to guide research on students with disabilities. In this framework…
Descriptors: Test Validity, Evaluation Research, Achievement Tests, Elementary Secondary Education
Yu, Lei; Moses, Tim; Puhan, Gautam; Dorans, Neil – ETS Research Report Series, 2008
All differential item functioning (DIF) methods require at least a moderate sample size for effective DIF detection. Samples of fewer than 200 examinees pose a challenge for DIF analysis. Smoothing can improve the estimation of the population distribution by preserving major features of an observed frequency distribution while eliminating the…
Descriptors: Test Bias, Item Response Theory, Sample Size, Evaluation Criteria
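The Yu et al. entry above concerns smoothing an observed score distribution before small-sample DIF analysis. Below is a rough sketch of one common presmoothing approach, log-linear (Poisson) smoothing of score frequencies with polynomial terms, using simulated data and statsmodels; this illustrates the general technique, not the specific procedure in the report:

```python
# Log-linear presmoothing of an observed score distribution: Poisson regression
# of score frequencies on polynomial score terms. Simulated small sample.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
max_score = 30
scores = rng.binomial(max_score, 0.55, size=150)        # small sample of total scores
freq = np.bincount(scores, minlength=max_score + 1)     # observed frequency at each score

x = np.arange(max_score + 1) / max_score                # rescaled score points for stability
# A degree-3 log-linear model preserves the first three moments of the observed distribution
X = sm.add_constant(np.column_stack([x, x**2, x**3]))
fit = sm.GLM(freq, X, family=sm.families.Poisson()).fit()
smoothed = fit.fittedvalues                              # smoothed expected frequencies

for s, f_obs, f_sm in zip(np.arange(max_score + 1), freq, smoothed):
    print(f"{s:2d}  observed {f_obs:3d}  smoothed {f_sm:6.1f}")
```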
Ong, Saw Lan; Sireci, Stephen G. – Online Submission, 2008
Many researchers, and the International Test Commission's guidelines (Hambleton, 2005), caution against treating scores from different language versions of a test as equivalent without conducting empirical research to verify such equivalence. In this study, we evaluated the equivalence of English and Malay versions of a 9th-grade math test administered in…
Descriptors: Test Bias, Bilingual Students, Mathematics Achievement, Mathematics Tests
Riegg, Stephanie K. – Review of Higher Education, 2008
This article highlights the problem of omitted variable bias in research on the causal effect of financial aid on college-going. I first describe the problem of self-selection and the resulting bias from omitted variables. I then assess and explore the strengths and weaknesses of random assignment, multivariate regression, proxy variables, fixed…
Descriptors: Research Methodology, Causal Models, Inferences, Test Bias
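The Riegg entry describes omitted variable bias in estimates of the effect of financial aid on college-going. A small simulation, with made-up numbers and statsmodels, showing how omitting a confounder that drives both aid and enrollment inflates the estimated aid coefficient:

```python
# Simulation of omitted variable bias: an unobserved trait (motivation) affects both
# aid receipt and a college-going index, so the naive regression overstates the aid effect.
# All quantities are fabricated for illustration; this is not the article's analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5000
motivation = rng.normal(0, 1, n)                          # unobserved confounder
aid = 2000 * motivation + rng.normal(0, 3000, n)          # aid correlates with motivation
# latent college-going index (kept continuous for simplicity); true aid effect = 0.0001
enroll = 0.0001 * aid + 0.3 * motivation + rng.normal(0, 1, n)

naive = sm.OLS(enroll, sm.add_constant(aid)).fit()
full = sm.OLS(enroll, sm.add_constant(np.column_stack([aid, motivation]))).fit()

print("naive aid coefficient:   ", naive.params[1])   # biased upward by the omitted trait
print("adjusted aid coefficient:", full.params[1])    # close to the true 0.0001
```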

