Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 7
Since 2016 (last 10 years): 32
Since 2006 (last 20 years): 96
Descriptor
Comparative Analysis: 124
Test Bias: 124
Test Items: 124
Item Response Theory: 38
Statistical Analysis: 37
Scores: 28
Difficulty Level: 26
Foreign Countries: 25
Item Analysis: 21
Mathematics Tests: 18
Models: 18
Author
Ercikan, Kadriye: 3
Jin, Ying: 3
Magis, David: 3
Stricker, Lawrence J.: 3
Zumbo, Bruno D.: 3
Abedi, Jamal: 2
Bridgeman, Brent: 2
De Boeck, Paul: 2
Hambleton, Ronald K.: 2
Hou, Likun: 2
Ironson, Gail H.: 2
Education Level
Higher Education: 16
Elementary Education: 14
Postsecondary Education: 14
Secondary Education: 13
Elementary Secondary Education: 9
Grade 8: 8
Middle Schools: 8
Grade 4: 7
Intermediate Grades: 7
Grade 5: 6
High Schools: 6
Audience
Researchers: 1
Location
Canada: 4
Germany: 4
Taiwan: 4
North Carolina: 2
Norway: 2
Singapore: 2
Spain: 2
Turkey: 2
United States: 2
Australia: 1
Austria: 1
Laws, Policies, & Programs
No Child Left Behind Act 2001: 2
Robitzsch, Alexander; Lüdtke, Oliver – Journal of Educational and Behavioral Statistics, 2022
One of the primary goals of international large-scale assessments in education is the comparison of country means in student achievement. This article introduces a framework for discussing differential item functioning (DIF) for such mean comparisons. We compare three different linking methods: concurrent scaling based on full invariance,…
Descriptors: Test Bias, International Assessment, Scaling, Comparative Analysis
Diaz, Emily; Brooks, Gordon; Johanson, George – International Journal of Assessment Tools in Education, 2021
This Monte Carlo study assessed Type I error in differential item functioning analyses using Lord's chi-square (LC), the Likelihood Ratio Test (LRT), and the Mantel-Haenszel (MH) procedure. Two research interests were investigated: item response theory (IRT) model specification in LC and the LRT, and the continuity correction in the MH procedure. This study…
Descriptors: Test Bias, Item Response Theory, Statistical Analysis, Comparative Analysis
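The Mantel-Haenszel procedure named in the abstract above tests DIF by stratifying examinees on total score and comparing correct-response odds across groups. A minimal sketch of the MH common odds ratio and the chi-square statistic with the continuity correction (the correction studied by Diaz et al.); the strata, counts, and function name here are illustrative assumptions, not data from any cited study:

```python
def mantel_haenszel_dif(tables):
    """tables: list of (A, B, C, D) per score stratum, where
    A = reference correct, B = reference incorrect,
    C = focal correct,     D = focal incorrect."""
    sum_A = sum_EA = sum_var = 0.0
    num = den = 0.0
    for A, B, C, D in tables:
        N = A + B + C + D
        n_ref, n_foc = A + B, C + D      # group sizes in this stratum
        m1, m0 = A + C, B + D            # correct / incorrect totals
        sum_A += A
        sum_EA += n_ref * m1 / N         # expected A under no DIF
        sum_var += n_ref * n_foc * m1 * m0 / (N * N * (N - 1))
        num += A * D / N
        den += B * C / N
    alpha_mh = num / den                 # common odds ratio estimate
    # Continuity-corrected MH chi-square (subtract 0.5 before squaring)
    chi2 = (abs(sum_A - sum_EA) - 0.5) ** 2 / sum_var
    return alpha_mh, chi2

# Toy data: three ability strata, focal group slightly disadvantaged.
strata = [(40, 10, 30, 20), (30, 20, 20, 30), (20, 30, 10, 40)]
alpha, chi2 = mantel_haenszel_dif(strata)
print(round(alpha, 3), round(chi2, 3))
```

An odds ratio above 1 with a large chi-square, as in this toy data, would flag the item as favoring the reference group.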
Soysal, Sumeyra; Yilmaz Kogar, Esin – International Journal of Assessment Tools in Education, 2021
In this study, whether item position effects lead to DIF when different test booklets are used was investigated. To do this, the methods of Lord's chi-square and Raju's unsigned area with the 3PL model, with and without item purification, were used. When the performance of the methods was compared, it was revealed that…
Descriptors: Item Response Theory, Test Bias, Test Items, Comparative Analysis
Zwick, Rebecca; Ye, Lei; Isham, Steven – Journal of Educational Measurement, 2018
In typical differential item functioning (DIF) assessments, an item's DIF status is not influenced by its status in previous test administrations. An item that has shown DIF at multiple administrations may be treated the same way as an item that has shown DIF in only the most recent administration. Therefore, much useful information about the…
Descriptors: Test Bias, Testing, Test Items, Bayesian Statistics
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
Alexander James Kwako – ProQuest LLC, 2023
Automated assessment using Natural Language Processing (NLP) has the potential to make English speaking assessments more reliable, authentic, and accessible. Yet without careful examination, NLP may exacerbate social prejudices based on gender or native language (L1). Current NLP-based assessments are prone to such biases, yet research and…
Descriptors: Gender Bias, Natural Language Processing, Native Language, Computational Linguistics
Ames, Allison J. – Educational and Psychological Measurement, 2022
Individual response style behaviors, unrelated to the latent trait of interest, may influence responses to ordinal survey items. Response style can introduce bias in the total score with respect to the trait of interest, threatening valid interpretation of scores. Despite claims of response style stability across scales, there has been little…
Descriptors: Response Style (Tests), Individual Differences, Scores, Test Items
Russell, Michael; Szendey, Olivia; Li, Zhushan – Educational Assessment, 2022
Recent research provides evidence that an intersectional approach to defining reference and focal groups results in a higher percentage of comparisons flagged for potential DIF. The study presented here examined the generalizability of this pattern across methods for examining DIF. While the level of DIF detection differed among the four methods…
Descriptors: Comparative Analysis, Item Analysis, Test Items, Test Construction
Bundsgaard, Jeppe – Large-scale Assessments in Education, 2019
International large-scale assessments like the International Computer and Information Literacy Study (ICILS) (Fraillon et al. in International Association for the Evaluation of Educational Achievement (IEA), 2015) provide important empirically based knowledge, through the proficiency scales, of what characterizes tasks at different difficulty levels,…
Descriptors: Test Bias, International Assessment, Test Items, Difficulty Level
Inal, Hatice; Anil, Duygu – Eurasian Journal of Educational Research, 2018
Purpose: This study aimed to examine the impact of differential item functioning in anchor items on the group invariance in test equating for different sample sizes. Within this scope, the factors chosen to investigate the group invariance in test equating were sample size, frequency of sample size of subgroups, differential form of differential…
Descriptors: Equated Scores, Test Bias, Test Items, Sample Size
Uyar, Seyma – Eurasian Journal of Educational Research, 2020
Purpose: This study aimed to compare the performance of the latent class differential item functioning (DIF) approach and IRT-based DIF methods using manifest grouping. The study was also intended to draw attention to latent class DIF studies in Turkey, and it examined DIF in the PISA 2015 science data set. Research…
Descriptors: Item Response Theory, Foreign Countries, Cross Cultural Studies, Item Analysis
New Meridian Corporation, 2020
New Meridian Corporation has developed the "Quality Testing Standards and Criteria for Comparability Claims" (QTS). The goal of the QTS is to provide guidance to states that are interested in including content from the New Meridian item bank and intend to make comparability claims with "other assessments" that include New…
Descriptors: Testing, Standards, Comparative Analysis, Guidelines
Ayodele, Alicia Nicole – ProQuest LLC, 2017
Within polytomous items, differential item functioning (DIF) can take on various forms due to the number of response categories. The lack of invariance at this level is referred to as differential step functioning (DSF). The most common DSF methods in the literature are the adjacent category log odds ratio (AC-LOR) estimator and cumulative…
Descriptors: Statistical Analysis, Test Bias, Test Items, Scores
A Comparison of Four Differential Item Functioning Procedures in the Presence of Multidimensionality
Bastug, Özlem Yesim Özbek – Educational Research and Reviews, 2016
Differential item functioning (DIF), or item bias, is a relatively new concept. It has been one of the most controversial and most studied subjects in measurement theory. DIF occurs when people who have the same ability level but come from different groups have a different probability of a correct response. According to Item Response Theory (IRT),…
Descriptors: Test Bias, Comparative Analysis, Item Response Theory, Regression (Statistics)
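The definition in the Bastug abstract above — equal ability, unequal probability of a correct response — can be made concrete with an IRT model. A minimal sketch using the two-parameter logistic model, where the focal group faces a higher item difficulty at the same ability; all parameter values here are hypothetical:

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model:
    theta = ability, a = discrimination, b = difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

theta = 0.0                                # same ability for both groups
p_ref = p_correct(theta, a=1.2, b=-0.5)    # reference-group parameters
p_foc = p_correct(theta, a=1.2, b=0.3)     # focal group faces a harder item
print(round(p_ref, 3), round(p_foc, 3))    # unequal probabilities => DIF
```

Because only the difficulty parameter differs here, the gap is constant in direction across ability levels (uniform DIF); a group difference in discrimination would instead produce crossing curves (non-uniform DIF).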