Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 34 |
| Since 2022 (last 5 years) | 221 |
| Since 2017 (last 10 years) | 566 |
| Since 2007 (last 20 years) | 1373 |
Audience
| Audience | Count |
| --- | --- |
| Researchers | 110 |
| Practitioners | 107 |
| Teachers | 46 |
| Administrators | 25 |
| Policymakers | 24 |
| Counselors | 12 |
| Parents | 7 |
| Students | 7 |
| Support Staff | 4 |
| Community | 2 |
Location
| Location | Count |
| --- | --- |
| California | 61 |
| Canada | 60 |
| United States | 57 |
| Turkey | 47 |
| Australia | 43 |
| Florida | 34 |
| Germany | 26 |
| Texas | 26 |
| China | 25 |
| Netherlands | 25 |
| Iran | 22 |
What Works Clearinghouse Rating
| Rating | Count |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Farshad Effatpanah; Purya Baghaei; Hamdollah Ravand; Olga Kunina-Habenicht – International Journal of Testing, 2025
This study applied the Mixed Rasch Model (MRM) to the listening comprehension section of the International English Language Testing System (IELTS) to detect latent class differential item functioning (DIF) by exploring multiple profiles of second/foreign language listeners. Item responses of 462 examinees to an IELTS listening test were subjected…
Descriptors: Item Response Theory, Second Language Learning, Listening Comprehension, English (Second Language)
Minghui Yao; Yunjie Xu – Sociological Methods & Research, 2024
As a crucial method in organizational and social behavior research, self-report surveys must manage method bias. Method biases are distorted scores in survey response, distorted variance in variables, and distorted relational estimates between variables caused by method designs. Studies on method bias have focused on "post hoc"…
Descriptors: Statistical Bias, Social Science Research, Questionnaires, Test Bias
Tatiana Artamonova; Maria Hasler-Barker; Edna Velásquez – Journal of Latinos and Education, 2024
This paper discusses the Texas Examinations of Educator Standards (TExES) Languages Other Than English (LOTE) -- Spanish exam and its potential bias, particularly against teacher candidates with a Spanish heritage language (HL) background. In Texas, most teacher candidates, or college students of Spanish preparing for secondary…
Descriptors: Language Tests, Test Bias, Spanish, Native Language
Hwanggyu Lim; Danqi Zhu; Edison M. Choe; Kyung T. Han – Journal of Educational Measurement, 2024
This study presents a generalized version of the residual differential item functioning (RDIF) detection framework in item response theory, named GRDIF, to analyze differential item functioning (DIF) in multiple groups. The GRDIF framework retains the advantages of the original RDIF framework, such as computational efficiency and ease of…
Descriptors: Item Response Theory, Test Bias, Test Reliability, Test Construction
Randall, Jennifer – Educational Assessment, 2023
In a justice-oriented antiracist assessment process, attention to the disruption of white supremacy must occur at every stage--from construct articulation to score reporting. An important step in the assessment development process is the item review stage often referred to as Bias/Fairness and Sensitivity Review. I argue that typical approaches to…
Descriptors: Social Justice, Racism, Test Bias, Test Items
Weese, James D.; Turner, Ronna C.; Ames, Allison; Crawford, Brandon; Liang, Xinya – Educational and Psychological Measurement, 2022
A simulation study was conducted to investigate the heuristics of the SIBTEST procedure and how it compares with ETS classification guidelines used with the Mantel-Haenszel procedure. Prior heuristics have been used for nearly 25 years, but they are based on a simulation study that was restricted due to computer limitations and that modeled item…
Descriptors: Test Bias, Heuristics, Classification, Statistical Analysis
Dimitrov, Dimiter M.; Atanasov, Dimitar V. – Educational and Psychological Measurement, 2022
This study offers an approach to testing for differential item functioning (DIF) in a recently developed measurement framework, referred to as "D"-scoring method (DSM). Under the proposed approach, called "P-Z" method of testing for DIF, the item response functions of two groups (reference and focal) are compared by…
Descriptors: Test Bias, Methods, Test Items, Scoring
James D. Weese; Ronna C. Turner; Allison Ames; Xinya Liang; Brandon Crawford – Journal of Experimental Education, 2024
In this study, a standardized effect size was created for use with the SIBTEST procedure. Using this standardized effect size, a single set of heuristics was developed that is appropriate for data fitting different item response models (e.g., 2-parameter logistic, 3-parameter logistic). The standardized effect size rescales the raw beta-uni value…
Descriptors: Test Bias, Test Items, Item Response Theory, Effect Size
Gregory J. Crowther; Benjamin L. Wiggins – Journal of Microbiology & Biology Education, 2024
Students in STEM know well the stress, challenge, and effort that accompany college exams. As a widely recognizable feature of the STEM classroom experience, high-stakes assessments serve as crucial cultural gateways in shaping both preparation and motivation for careers. In this essay, we identify and discuss issues of power around STEM exams to…
Descriptors: STEM Education, High Stakes Tests, Test Bias, Power Structure
Christin Rickman – ProQuest LLC, 2024
This dissertation examines the landmark case Larry P. v. Riles and its impact on addressing the disproportionality and overrepresentation of Black and/or African American students in special education within California. Despite the court's ruling, which prohibited the use of IQ tests for Black students for special education placement due to…
Descriptors: Special Education, African American Students, Racial Discrimination, Alternative Assessment
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
Chalmers, R. Philip – Journal of Educational Measurement, 2023
Several marginal effect size (ES) statistics suitable for quantifying the magnitude of differential item functioning (DIF) have been proposed in the area of item response theory; for instance, the Differential Functioning of Items and Tests (DFIT) statistics, signed and unsigned item difference in the sample statistics (SIDS, UIDS, NSIDS, and…
Descriptors: Test Bias, Item Response Theory, Definitions, Monte Carlo Methods
Veronica Y. Kang; Sunyoung Kim; Emily V. Gregori; Daniel M. Maggin; Jason C. Chow; Hongyang Zhao – Journal of Speech, Language, and Hearing Research, 2025
Purpose: Early language intervention is essential for children with indicators of language delay. Enhanced milieu teaching (EMT) is a naturalistic intervention that supports the language development of children with emerging language. We conducted a systematic review and meta-analysis of all qualifying single-case and group design studies that…
Descriptors: Literature Reviews, Meta Analysis, Early Intervention, Response to Intervention
Yousef Abdelqader Abu shindi; Muna Abdullah Al-Bahrani – Psychology in the Schools, 2025
The current study examined the psychometric properties of the Career Thoughts Inventory (CTI) and its performance among a sample of 2366 adolescents: 1037 (45.4%) males and 1289 (54.5%) females. Item Response Theory (IRT) was applied to identify which CTI items proficiently contribute to a single proper measurement of the CTI. IRT evaluates the amount of…
Descriptors: Adolescents, Foreign Countries, Measures (Individuals), Item Response Theory
Martijn Schoenmakers; Jesper Tijmstra; Jeroen Vermunt; Maria Bolsinova – Educational and Psychological Measurement, 2024
Extreme response style (ERS), the tendency of participants to select extreme item categories regardless of the item content, has frequently been found to decrease the validity of Likert-type questionnaire results. For this reason, various item response theory (IRT) models have been proposed to model ERS and correct for it. Comparisons of these…
Descriptors: Item Response Theory, Response Style (Tests), Models, Likert Scales