Showing 1 to 15 of 26 results
Peer reviewed
PDF on ERIC (full text available)
Diaz, Emily; Brooks, Gordon; Johanson, George – International Journal of Assessment Tools in Education, 2021
This Monte Carlo study assessed Type I error in differential item functioning analyses using Lord's chi-square (LC), the likelihood ratio test (LRT), and the Mantel-Haenszel (MH) procedure. Two research interests were investigated: item response theory (IRT) model specification in LC and the LRT, and the continuity correction in the MH procedure. This study…
Descriptors: Test Bias, Item Response Theory, Statistical Analysis, Comparative Analysis
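The MH procedure and its continuity correction reduce to a short computation over score-level 2x2 tables. A minimal sketch in Python, assuming one table per matched score stratum (the function name is illustrative, not from the study):

    import numpy as np

    def mh_chi_square(tables, continuity_correction=True):
        """Mantel-Haenszel DIF chi-square from one 2x2 table per matched
        score level: [[A_k, B_k], [C_k, D_k]] (rows: reference/focal
        group; columns: correct/incorrect)."""
        t = np.asarray(tables, dtype=float)          # shape (K, 2, 2)
        A = t[:, 0, 0]
        n_ref, n_foc = t[:, 0, :].sum(1), t[:, 1, :].sum(1)
        m_right, m_wrong = t[:, :, 0].sum(1), t[:, :, 1].sum(1)
        T = n_ref + n_foc
        expected = n_ref * m_right / T
        variance = n_ref * n_foc * m_right * m_wrong / (T**2 * (T - 1))
        dev = abs(A.sum() - expected.sum())
        if continuity_correction:                    # the correction under study
            dev = max(dev - 0.5, 0.0)
        return dev**2 / variance.sum()               # ~ chi-square, 1 df, under no DIF

Whether subtracting 0.5 helps or simply makes the test conservative in small samples is exactly the kind of question the study's Type I error rates address.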
Peer reviewed
PDF on ERIC (full text available)
Inal, Hatice; Anil, Duygu – Eurasian Journal of Educational Research, 2018
Purpose: This study examined the impact of differential item functioning in anchor items on group invariance in test equating across different sample sizes. Within this scope, the factors chosen to investigate group invariance in test equating were sample size, frequency of sample size of subgroups, differential form of differential…
Descriptors: Equated Scores, Test Bias, Test Items, Sample Size
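Group invariance here means the equating function should come out (approximately) the same regardless of which subgroup it is estimated in. A rough Python sketch of one way to quantify the gap, assuming simple mean-sigma linear equating rather than the study's own design (function names are illustrative):

    import numpy as np

    def mean_sigma_equate(x, y):
        """Linear (mean-sigma) equating of scores on form X onto form Y."""
        a = y.std() / x.std()
        b = y.mean() - a * x.mean()
        return lambda s: a * s + b

    def invariance_gap(x_g1, y_g1, x_g2, y_g2, grid):
        """RMS difference between two subgroups' equating functions over a
        score grid; larger values signal a failure of group invariance."""
        e1 = mean_sigma_equate(x_g1, y_g1)
        e2 = mean_sigma_equate(x_g2, y_g2)
        return np.sqrt(np.mean((e1(grid) - e2(grid)) ** 2))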
Ayodele, Alicia Nicole – ProQuest LLC, 2017
Within polytomous items, differential item functioning (DIF) can take on various forms due to the number of response categories. The lack of invariance at this level is referred to as differential step functioning (DSF). The most common DSF methods in the literature are the adjacent category log odds ratio (AC-LOR) estimator and cumulative…
Descriptors: Statistical Analysis, Test Bias, Test Items, Scores
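The AC-LOR estimator named in the abstract compares the odds of reaching each adjacent score category across groups, one estimate per step. A deliberately simplified sketch that pools over matched score strata (real DSF estimators aggregate stratum by stratum; ac_lor is a hypothetical name):

    import numpy as np

    def ac_lor(ref_counts, foc_counts):
        """Adjacent-category log odds ratios for one polytomous item.
        ref_counts / foc_counts: response counts per category (0..m) for
        matched reference and focal examinees. Returns one DSF estimate
        per step j (scoring j vs. j-1)."""
        ref = np.asarray(ref_counts, dtype=float) + 0.5   # zero-cell smoothing
        foc = np.asarray(foc_counts, dtype=float) + 0.5
        return np.log((ref[1:] * foc[:-1]) / (ref[:-1] * foc[1:]))

A step whose log odds ratio differs from the others in sign or size is the pattern that distinguishes DSF from a single item-level DIF effect.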
Peer reviewed
Direct link
Wong, Vivian C.; Valentine, Jeffrey C.; Miller-Bains, Kate – Journal of Research on Educational Effectiveness, 2017
This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We examine the performance of three common covariate types in observational studies where the outcome is a standardized reading or math test: pretest measures, local geographic matching, and rich covariate sets with a strong…
Descriptors: Observation, Educational Research, Standardized Tests, Reading Tests
Peer reviewed
Direct link
Hidalgo, Mª Dolores; Gómez-Benito, Juana; Zumbo, Bruno D. – Educational and Psychological Measurement, 2014
The authors analyze the effectiveness of the R² and delta log odds ratio effect size measures when using logistic regression analysis to detect differential item functioning (DIF) in dichotomous items. A simulation study was carried out, and the Type I error rate and power estimates under conditions in which only statistical testing…
Descriptors: Regression (Statistics), Test Bias, Effect Size, Test Items
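Both effect sizes under study fall out of a nested pair of logistic regressions. A sketch using statsmodels, assuming uniform DIF only and McFadden's pseudo-R² (the paper evaluates specific R² variants, so treat this as illustrative):

    import numpy as np
    import statsmodels.api as sm

    def lr_dif(y, total_score, group):
        """Uniform-DIF screen for one dichotomous item. Fits the matching
        model and the matching+group model, then returns the group log
        odds ratio, the pseudo-R-squared gain, and the 1-df LR statistic."""
        X0 = sm.add_constant(total_score)
        X1 = sm.add_constant(np.column_stack([total_score, group]))
        m0 = sm.Logit(y, X0).fit(disp=0)
        m1 = sm.Logit(y, X1).fit(disp=0)
        delta_log_odds = m1.params[-1]
        delta_r2 = m1.prsquared - m0.prsquared
        lr_stat = 2 * (m1.llf - m0.llf)
        return delta_log_odds, delta_r2, lr_stat

Flagging on the statistical test alone inflates false positives as samples grow; pairing the test with an effect-size threshold is the practice the study interrogates.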
Peer reviewed
PDF on ERIC (full text available)
Kabasakal, Kübra Atalay; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2015
This study examines the effect of differential item functioning (DIF) items on test equating through multilevel item response models (MIRMs) and traditional IRMs. The performances of three different equating models were investigated under 24 different simulation conditions, and the variables whose effects were examined included sample size, test…
Descriptors: Test Bias, Equated Scores, Item Response Theory, Simulation
Peer reviewed
Direct link
Oshima, T. C.; Wright, Keith; White, Nick – International Journal of Testing, 2015
Raju, van der Linden, and Fleer (1995) introduced a framework for differential functioning of items and tests (DFIT) for unidimensional dichotomous models. Since then, DFIT has proven a versatile framework, as it can handle polytomous as well as multidimensional models at both the item and test levels. However, DFIT is still limited…
Descriptors: Test Bias, Item Response Theory, Test Items, Simulation
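DFIT's core item-level index, NCDIF, is the expected squared gap between the two groups' item response curves over the focal ability distribution. A minimal sketch for the original unidimensional dichotomous case, using a 2PL curve:

    import numpy as np

    def p_2pl(theta, a, b):
        """Two-parameter logistic item response function."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def ncdif(theta_focal, a_ref, b_ref, a_foc, b_foc):
        """Noncompensatory DIF: mean squared difference between focal- and
        reference-calibrated curves, evaluated at focal examinees' thetas."""
        gap = p_2pl(theta_focal, a_foc, b_foc) - p_2pl(theta_focal, a_ref, b_ref)
        return np.mean(gap ** 2)

CDIF and the test-level DTF index aggregate the same gaps with signs retained, which is what allows item-level effects to cancel at the test level.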
Peer reviewed
Direct link
Awad, Germine H.; Patall, Erika A.; Rackley, Kadie R.; Reilly, Erin D. – Journal of Educational & Psychological Consultation, 2016
As the US continues to diversify, methods for accurately assessing human behavior must evolve. This paper offers multicultural considerations for several stages of the research process in psychological research and consultation. Implications regarding the comparative research framework are discussed, and suggestions are offered on how to…
Descriptors: Cultural Awareness, Psychological Studies, Control Groups, Educational Assessment
Peer reviewed
PDF on ERIC (full text available)
Elosua, Paula; Wells, Craig – Psicologica: International Journal of Methodology and Experimental Psychology, 2013
The purpose of the present study was to compare the Type I error rate and power of two model-based procedures, the mean and covariance structure (MACS) model and item response theory (IRT), and an observed-score-based procedure, ordinal logistic regression, for detecting differential item functioning (DIF) in polytomous items. A simulation…
Descriptors: Test Bias, Test Items, Item Response Theory, Regression (Statistics)
Peer reviewed
PDF on ERIC (full text available)
Atalay Kabasakal, Kübra; Arsan, Nihan; Gök, Bilge; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2014
This simulation study compared the performances (Type I error and power) of Mantel-Haenszel (MH), SIBTEST, and item response theory-likelihood ratio (IRT-LR) methods under certain conditions. Manipulated factors were sample size, ability differences between groups, test length, the percentage of differential item functioning (DIF), and underlying…
Descriptors: Comparative Analysis, Item Response Theory, Statistical Analysis, Test Bias
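Of the three procedures compared, SIBTEST's effect estimator is the easiest to sketch: a weighted mean difference in item performance across matched strata. The version below omits SIBTEST's regression correction for measurement error in the matching score, so it is only an illustration:

    import numpy as np

    def sibtest_beta(score_ref, item_ref, score_foc, item_foc):
        """Simplified beta-uni: weighted mean reference-minus-focal gap in
        item scores across matching-score strata (no regression correction).
        All arguments are NumPy arrays."""
        beta, total = 0.0, 0.0
        for k in np.union1d(score_ref, score_foc):
            r = item_ref[score_ref == k]
            f = item_foc[score_foc == k]
            if len(r) and len(f):
                w = len(r) + len(f)
                beta += w * (r.mean() - f.mean())
                total += w
        return beta / total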
Peer reviewed
Direct link
Woods, Carol M.; Cai, Li; Wang, Mian – Educational and Psychological Measurement, 2013
Differential item functioning (DIF) occurs when the probability of responding in a particular category to an item differs for members of different groups who are matched on the construct being measured. The identification of DIF is important for valid measurement. This research evaluates an improved version of Lord's X² Wald test for…
Descriptors: Test Bias, Item Response Theory, Computation, Comparative Analysis
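Lord's Wald test itself has a compact form; the improvement evaluated in the article concerns how the parameter covariance matrices are estimated and how the groups' metrics are linked, not the statistic below. A sketch:

    import numpy as np

    def lord_wald(params_ref, params_foc, cov_ref, cov_foc):
        """Wald chi-square comparing one item's parameter estimates across
        groups; cov_* are covariance matrices of the estimates. df equals
        the number of item parameters."""
        d = np.asarray(params_ref) - np.asarray(params_foc)
        v = np.asarray(cov_ref) + np.asarray(cov_foc)
        return float(d @ np.linalg.solve(v, d))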
Peer reviewed
Direct link
Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D. – Educational and Psychological Measurement, 2013
The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…
Descriptors: Test Bias, Effect Size, Item Response Theory, Comparative Analysis
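For context, the SA effect size is transparent under the Rasch model: the signed area between the two groups' item characteristic curves reduces to a simple difference in difficulty.

    \mathrm{SA} = \int_{-\infty}^{\infty} \bigl[ P_R(\theta) - P_F(\theta) \bigr] \, d\theta
                = b_F - b_R,
    \qquad P_g(\theta) = \frac{1}{1 + e^{-(\theta - b_g)}} .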
Liu, Qian – ProQuest LLC, 2011
For this dissertation, four item purification procedures were implemented within the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
Descriptors: Test Bias, Test Items, Statistical Analysis, Models
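Whatever the underlying DIF test, item purification shares one loop: strike flagged items from the anchor used for matching, retest, and iterate to convergence. A generic Python sketch (the dissertation's four procedures differ in the dif_test step and search direction, which this does not capture):

    def purify(items, dif_test, alpha=0.05, max_iter=20):
        """Iteratively rebuild the anchor from items currently judged
        DIF-free. dif_test(item, anchor) -> p-value is any DIF procedure."""
        anchor = set(items)
        flagged = set()
        for _ in range(max_iter):
            flagged = {i for i in items if dif_test(i, anchor - {i}) < alpha}
            new_anchor = set(items) - flagged
            if new_anchor == anchor:      # no change: converged
                break
            anchor = new_anchor
        return anchor, flagged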
Peer reviewed
PDF on ERIC (full text available)
Lee, Yi-Hsuan; Zhang, Jinming – ETS Research Report Series, 2010
This report examines the consequences of differential item functioning (DIF) using simulated data. Its impact on total score, item response theory (IRT) ability estimate, and test reliability was evaluated in various testing scenarios created by manipulating the following four factors: test length, percentage of DIF items per form, sample sizes of…
Descriptors: Test Bias, Item Response Theory, Test Items, Scores
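The score-level consequence being measured is easy to see in miniature: shift the difficulty of a few items for one group and compare expected totals. A toy Rasch illustration (the numbers are arbitrary, not the report's conditions):

    import numpy as np

    rng = np.random.default_rng(1)

    def expected_total(theta, b):
        """Expected Rasch total score at ability theta for difficulties b."""
        return (1.0 / (1.0 + np.exp(-(theta - b)))).sum()

    b = rng.normal(0.0, 1.0, 40)        # a 40-item form
    shift = np.zeros(40)
    shift[:8] = 0.5                     # 20% of items harder for the focal group
    bias = expected_total(0.0, b) - expected_total(0.0, b + shift)
    print(f"expected-score bias at theta=0: {bias:.2f} points")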
Peer reviewed
Direct link
Atar, Burcu; Kamata, Akihito – Hacettepe University Journal of Education, 2011
The Type I error rates and power of the IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
Descriptors: Test Bias, Sample Size, Monte Carlo Methods, Item Response Theory
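Monte Carlo studies of polytomous DIF share the same generating step. A sketch that draws graded-response-model data, injecting uniform DIF by shifting the focal group's step difficulties (parameter values are placeholders):

    import numpy as np

    rng = np.random.default_rng(7)

    def grm_draw(theta, a, b_steps):
        """One graded-response-model response; b_steps are increasing step
        difficulties, giving len(b_steps) + 1 ordered categories."""
        p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b_steps))))
        p_cat = -np.diff(np.concatenate(([1.0], p_star, [0.0])))
        return rng.choice(len(p_cat), p=p_cat / p_cat.sum())

    b_ref = np.array([-1.0, 0.0, 1.0])
    b_foc = b_ref + 0.4                 # uniform DIF against the focal group
    x = grm_draw(theta=0.2, a=1.2, b_steps=b_foc)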