| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 14 |
| Descriptor | Records |
| --- | --- |
| Test Bias | 105 |
| Testing Programs | 105 |
| Elementary Secondary Education | 43 |
| Testing Problems | 34 |
| State Programs | 32 |
| Test Validity | 29 |
| Achievement Tests | 22 |
| Standardized Tests | 22 |
| Test Construction | 22 |
| Educational Assessment | 21 |
| Test Use | 18 |
| Author | Records |
| --- | --- |
| Dorans, Neil J. | 3 |
| Green, Donald Ross | 2 |
| Herman, Joan L. | 2 |
| Neill, Monty | 2 |
| Phelps, Richard P. | 2 |
| Akour, Mutasem | 1 |
| Albano, Anthony D. | 1 |
| Allina, Amy | 1 |
| Ambrosio, John | 1 |
| And Others. | 1 |
| Angel, Dan | 1 |
| Education Level | Records |
| --- | --- |
| Higher Education | 4 |
| Early Childhood Education | 3 |
| Elementary Education | 3 |
| Grade 3 | 3 |
| Grade 4 | 3 |
| Grade 5 | 3 |
| Grade 6 | 3 |
| Grade 7 | 3 |
| Grade 8 | 3 |
| Intermediate Grades | 3 |
| Junior High Schools | 3 |
| Location | Records |
| --- | --- |
| Florida | 5 |
| United States | 4 |
| New York | 3 |
| Texas | 3 |
| California | 2 |
| Arizona | 1 |
| Australia | 1 |
| Canada | 1 |
| China | 1 |
| Denmark | 1 |
| France | 1 |
| Laws, Policies, & Programs | Records |
| --- | --- |
| No Child Left Behind Act 2001 | 4 |
| Individuals with Disabilities… | 1 |
Akour, Mutasem; Sabah, Saed; Hammouri, Hind – Journal of Psychoeducational Assessment, 2015
The purpose of this study was to apply two types of Differential Item Functioning (DIF), net and global DIF, as well as the framework of Differential Step Functioning (DSF) to real testing data to investigate measurement invariance related to test language. Data from the Program for International Student Assessment (PISA)-2006 polytomously scored…
Descriptors: Test Bias, Science Tests, Test Items, Scoring
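The differential step functioning (DSF) framework named in this abstract examines the transitions between adjacent score categories of a polytomous item rather than the item score as a whole. As an illustrative sketch only (not the study's actual procedure, which uses PISA 2006 data), the per-step log-odds contrast between a reference and a focal group at one matched ability level can be computed from category counts:

```python
import math

def step_log_odds(counts):
    """Adjacent-category step log-odds for a polytomous item.

    counts[k] = number of matched examinees scoring in category k.
    Step k compares category k against category k-1.
    """
    return [math.log(counts[k] / counts[k - 1]) for k in range(1, len(counts))]

def dsf_effects(ref_counts, focal_counts):
    """Per-step difference in log-odds between groups.

    Values near zero suggest the step functions equivalently in both
    groups; a markedly nonzero value flags DSF at that step.
    """
    return [r - f for r, f in zip(step_log_odds(ref_counts),
                                  step_log_odds(focal_counts))]

# Both groups double their counts at each step, so every step effect is 0.
effects = dsf_effects([10, 20, 40], [5, 10, 20])
```

In a full analysis these contrasts would be pooled across matched strata; the sketch above shows only the core step-level comparison that distinguishes DSF from item-level (net or global) DIF.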
New York State Education Department, 2016
This technical report provides detailed information regarding the technical, statistical, and measurement attributes of the New York State Testing Program (NYSTP) for the Grades 3-8 Common Core English Language Arts (ELA) and Mathematics 2016 Operational Tests. This report includes information about test content and test development, item (i.e.,…
Descriptors: Testing Programs, English, Language Arts, Mathematics Tests
New York State Education Department, 2015
This technical report provides detailed information regarding the technical, statistical, and measurement attributes of the New York State Testing Program (NYSTP) for the Grades 3-8 Common Core English Language Arts (ELA) and Mathematics 2015 Operational Tests. This report includes information about test content and test development, item (i.e.,…
Descriptors: Testing Programs, English, Language Arts, Mathematics Tests
New York State Education Department, 2014
This technical report provides detailed information regarding the technical, statistical, and measurement attributes of the New York State Testing Program (NYSTP) for the Grades 3-8 Common Core English Language Arts (ELA) and Mathematics 2014 Operational Tests. This report includes information about test content and test development, item (i.e.,…
Descriptors: Testing Programs, English, Language Arts, Mathematics Tests
Benítez, Isabel; Padilla, José-Luis – Journal of Mixed Methods Research, 2014
Differential item functioning (DIF) can undermine the validity of cross-lingual comparisons. While a lot of efficient statistics for detecting DIF are available, few general findings have been found to explain DIF results. The objective of the article was to study DIF sources by using a mixed method design. The design involves a quantitative phase…
Descriptors: Foreign Countries, Mixed Methods Research, Test Bias, Cross Cultural Studies
Doorey, Nancy A. – Council of Chief State School Officers, 2011
The work reported in this paper reflects a collaborative effort of many individuals representing multiple organizations. It began during a session at the October 2008 meeting of TILSA when a representative of a member state asked the group if any of their programs had experienced unexpected fluctuations in the annual state assessment scores, and…
Descriptors: Testing, Sampling, Expertise, Testing Programs
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
Paek, Insu; Guo, Hongwen – Applied Psychological Measurement, 2011
This study examined how much improvement was attainable with respect to accuracy of differential item functioning (DIF) measures and DIF detection rates in the Mantel-Haenszel procedure when employing focal and reference groups with notably unbalanced sample sizes where the focal group has a fixed small sample which does not satisfy the minimum…
Descriptors: Test Bias, Accuracy, Reference Groups, Investigations
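The Mantel-Haenszel procedure studied above pools 2×2 (group × item response) tables across matched score strata into a single common odds ratio, often reported on the ETS delta scale. A minimal sketch, assuming simple count tuples rather than the paper's simulation design:

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across score strata.

    Each stratum is a tuple (ref_correct, ref_wrong, focal_correct,
    focal_wrong) of counts for examinees matched on total score.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def mh_delta(alpha):
    """ETS delta metric: negative values indicate DIF against the focal group."""
    return -2.35 * math.log(alpha)

# In both strata the odds of success are identical across groups,
# so alpha = 1 and delta = 0 (no DIF).
strata = [(40, 10, 20, 5), (30, 30, 15, 15)]
delta = mh_delta(mh_odds_ratio(strata))
```

In operational use, |delta| values beyond roughly 1.5 (with statistical significance) are commonly flagged as large DIF; the sample-size imbalance examined in the paper affects how stable this estimate is when the focal group is small.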
French, Brian F.; Finch, W. Holmes – Journal of Educational Measurement, 2010
The purpose of this study was to examine the performance of differential item functioning (DIF) assessment in the presence of a multilevel structure that often underlies data from large-scale testing programs. Analyses were conducted using logistic regression (LR), a popular, flexible, and effective tool for DIF detection. Data were simulated…
Descriptors: Test Bias, Testing Programs, Evaluation, Measurement
Sinharay, Sandip; Dorans, Neil J.; Liang, Longjuan – Educational Measurement: Issues and Practice, 2011
Over the past few decades, those who take tests in the United States have exhibited increasing diversity with respect to native language. Standard psychometric procedures for ensuring item and test fairness that have existed for some time were developed when test-taking groups were predominantly native English speakers. A better understanding of…
Descriptors: Test Bias, Testing Programs, Psychometrics, Language Proficiency
Dorans, Neil J.; Liu, Jinghua – Educational Testing Service, 2009
The equating process links scores from different editions of the same test. For testing programs that build nearly parallel forms to the same explicit content and statistical specifications and administer forms under the same conditions, the linkings between the forms are expected to be equatings. Score equity assessment (SEA) provides a useful…
Descriptors: Testing Programs, Mathematics Tests, Quality Control, Psychometrics
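Score equity assessment builds on equating functions that link scores across test forms; per the abstract, SEA then checks whether those linkings hold up across subgroups. As a hedged illustration of the simplest such link (linear equating, which matches the first two moments of the score distributions; not the specific method of this report):

```python
import statistics

def linear_equate(x, scores_x, scores_y):
    """Map a raw score x on form X to the form Y scale.

    Linear equating aligns the mean and standard deviation of the two
    observed score distributions, so equated scores have the same
    standardized position on both forms.
    """
    mx, my = statistics.mean(scores_x), statistics.mean(scores_y)
    sx, sy = statistics.pstdev(scores_x), statistics.pstdev(scores_y)
    return my + (sy / sx) * (x - mx)

# Form X scores: mean 5, SD 5. Form Y scores: mean 20, SD 10.
# A form X score at the mean (5) maps to the form Y mean (20).
equated = linear_equate(5, [0, 10], [10, 30])
```

Under SEA, one would compute such a linking separately within each subgroup and compare the resulting functions; substantial divergence signals that the forms are not equally interchangeable for all groups.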
Bilir, Mustafa Kuzey – ProQuest LLC, 2009
This study uses a new psychometric model (mixture item response theory-MIMIC model) that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender, ethnicity, etc.), or across latent classes…
Descriptors: Test Items, Testing Programs, Markov Processes, Psychometrics
Puhan, Gautam; Moses, Timothy P.; Yu, Lei; Dorans, Neil J. – Journal of Educational Measurement, 2009
This study examined the extent to which log-linear smoothing could improve the accuracy of differential item functioning (DIF) estimates in small samples of examinees. Examinee responses from a certification test were analyzed using White examinees in the reference group and African American examinees in the focal group. Using a simulation…
Descriptors: Test Items, Reference Groups, Testing Programs, Raw Scores
National Inst. of Education (ED), Washington, DC. – 1981
Barbara Jordan served as the hearing officer for three-day adversary evaluation hearings about the pros and cons of minimum competency testing (MCT). This report is the complete transcript of the first day of proceedings. James Popham and George Madaus presented the opening arguments for the pro team and con team, respectively. Michael Scriven,…
Descriptors: Elementary Secondary Education, Hearings, Minimum Competency Testing, Test Bias
Dyer, Henry S. – NJEA Review, 1973
The retired vice-president of Educational Testing Service asserts that the chances of tests being misused are greater than ever. Speech delivered at ETS's Invitational Conference on Testing Problems on October 28, 1972, in New York, New York. (DS)
Descriptors: Group Testing, Intelligence Tests, Measurement Techniques, Test Bias