Showing all 11 results
Peer reviewed
Fikis, David R. J.; Oshima, T. C. – Educational and Psychological Measurement, 2017
Purification of the test has been a well-accepted procedure in enhancing the performance of tests for differential item functioning (DIF). As defined by Lord, purification requires reestimation of ability parameters after removing DIF items before conducting the final DIF analysis. IRTPRO 3 is a recently updated program for analyses in item…
Descriptors: Test Bias, Item Response Theory, Statistical Analysis, Computer Software
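The purification procedure this abstract refers to lends itself to a compact illustration. The sketch below is not IRTPRO's implementation: it simulates dichotomous responses with three uniform-DIF items, uses a crude score-stratified group comparison as a stand-in for a proper IRT DIF test, and iterates Lord's remove-and-rematch loop until the kept item set stabilizes. The `flag_dif` helper and its 0.07 threshold are illustrative assumptions.

```python
# A hypothetical sketch of Lord-style purification, not IRTPRO's code:
# flag DIF items, drop them from the matching criterion, re-match, and
# repeat until the kept item set stabilizes. The score-stratified mean
# comparison in flag_dif is a crude stand-in for a real IRT DIF test.
import numpy as np

rng = np.random.default_rng(0)
n_examinees, n_items = 2000, 20
theta = rng.normal(size=n_examinees)
group = rng.integers(0, 2, size=n_examinees)   # 0 = reference, 1 = focal
b = rng.normal(size=n_items)
b_shift = np.zeros(n_items)
b_shift[:3] = 0.8                              # items 0-2 carry uniform DIF
prob = 1 / (1 + np.exp(-(theta[:, None] - (b + group[:, None] * b_shift))))
resp = (rng.random((n_examinees, n_items)) < prob).astype(int)

def flag_dif(resp, group, keep, threshold=0.07):
    """Flag items whose group means differ within strata of the purified
    total score (illustrative threshold, not a calibrated test)."""
    match = resp[:, keep].sum(axis=1)
    strata = np.digitize(match, np.quantile(match, [0.25, 0.5, 0.75]))
    flagged = []
    for j in range(resp.shape[1]):
        diffs = []
        for s in range(4):
            foc = (strata == s) & (group == 1)
            ref = (strata == s) & (group == 0)
            if foc.any() and ref.any():
                diffs.append(resp[foc, j].mean() - resp[ref, j].mean())
        if abs(np.mean(diffs)) > threshold:
            flagged.append(j)
    return flagged

keep = list(range(n_items))
for _ in range(5):                              # purification iterations
    flagged = flag_dif(resp, group, keep)
    new_keep = [j for j in range(n_items) if j not in flagged]
    if new_keep == keep:                        # matching criterion is stable
        break
    keep = new_keep                             # re-match without DIF items
print("flagged items:", flagged)
```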
Peer reviewed
Wright, Keith D.; Oshima, T. C. – Educational and Psychological Measurement, 2015
This study established an effect size measure for the noncompensatory differential item functioning (NCDIF) index from the differential functioning of items and tests (DFIT) framework. The Mantel-Haenszel parameter served as the benchmark for developing NCDIF's effect size measure for reporting moderate and large differential item functioning in test items. The effect size of…
Descriptors: Effect Size, Test Bias, Test Items, Difficulty Level
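For readers unfamiliar with the index this abstract benchmarks: NCDIF for item i is the expected squared gap between the focal- and reference-group item characteristic functions, taken over the focal ability distribution, NCDIF_i = E_F[(P_iF(theta) - P_iR(theta))^2]. A minimal numerical sketch with made-up 2PL parameters follows; it illustrates the index itself, not Wright and Oshima's effect size procedure.

```python
# A minimal numerical sketch of the NCDIF index (made-up 2PL parameters):
# NCDIF_i = E_F[(P_iF(theta) - P_iR(theta))^2], the expected squared gap
# between focal- and reference-group ICFs over the focal ability density.
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic function."""
    return 1 / (1 + np.exp(-1.7 * a * (theta - b)))   # 1.7 = scaling constant

rng = np.random.default_rng(1)
theta_focal = rng.normal(size=5000)     # focal-group abilities

a_ref, b_ref = 1.2, 0.0                 # reference-group estimates (invented)
a_foc, b_foc = 1.2, 0.4                 # uniform DIF: harder for focal group

gap = icc_2pl(theta_focal, a_foc, b_foc) - icc_2pl(theta_focal, a_ref, b_ref)
print(f"NCDIF = {np.mean(gap ** 2):.4f}")
```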
Peer reviewed
Oshima, T. C.; Wright, Keith; White, Nick – International Journal of Testing, 2015
Raju, van der Linden, and Fleer (1995) introduced a framework for differential functioning of items and tests (DFIT) for unidimensional dichotomous models. Since then, DFIT has proven to be quite a versatile framework, as it can handle polytomous as well as multidimensional models at both the item and test levels. However, DFIT is still limited…
Descriptors: Test Bias, Item Response Theory, Test Items, Simulation
Peer reviewed
Kim, Jihye; Oshima, T. C. – Educational and Psychological Measurement, 2013
In a typical differential item functioning (DIF) analysis, a significance test is conducted for each item. Because a test consists of multiple items, such multiple testing may increase the chance of making at least one Type I error. The goal of this study was to investigate how to control the Type I error rate and power using adjustment…
Descriptors: Test Bias, Test Items, Statistical Analysis, Error of Measurement
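Two standard adjustments studied in this literature are Bonferroni control of the familywise error rate and Benjamini-Hochberg control of the false discovery rate. A minimal sketch with fabricated per-item DIF p-values, showing only the adjustment logic:

```python
# A minimal sketch of two adjustments studied for per-item DIF tests:
# Bonferroni (familywise error) and Benjamini-Hochberg (false discovery
# rate). The p-values are fabricated; only the adjustment logic matters.
import numpy as np

p_values = np.array([0.001, 0.012, 0.030, 0.245, 0.410, 0.620, 0.048, 0.003])
m, alpha = len(p_values), 0.05

# Bonferroni: test each of the m items at alpha / m
bonf_flags = p_values < alpha / m

# Benjamini-Hochberg: find the largest k with p_(k) <= (k / m) * alpha
order = np.argsort(p_values)
ranked = p_values[order]
below = ranked <= (np.arange(1, m + 1) / m) * alpha
cutoff = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
bh_flags = np.zeros(m, dtype=bool)
bh_flags[order[:cutoff]] = True

print("Bonferroni flags items:", np.nonzero(bonf_flags)[0])
print("BH flags items:        ", np.nonzero(bh_flags)[0])
```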
Peer reviewed
Snow, Teresa K.; Oshima, T. C. – Educational and Psychological Measurement, 2009
Oshima, Raju, and Flowers demonstrated the use of an item response theory-based technique for analyzing differential item functioning (DIF) and differential test functioning for dichotomously scored data that are intended to be multidimensional. Their study assumed that the number of intended-to-be-measured dimensions was correctly identified. In…
Descriptors: Test Bias, Item Response Theory, Psychometrics
Peer reviewed
Oshima, T. C.; Morris, S. B. – Educational Measurement: Issues and Practice, 2008
Nambury S. Raju (1937-2005) developed two model-based indices for differential item functioning (DIF) during his prolific career in psychometrics. Both methods, Raju's area measures (Raju, 1988) and Raju's DFIT (Raju, van der Linden, & Fleer, 1995), are based on quantifying the gap between item characteristic functions (ICFs). This approach…
Descriptors: Test Bias, Psychometrics, Methods, Test Items
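Raju's area measures quantify the ICF gap as an integral over theta; Raju (1988) derives closed-form areas, and for two 2PL items with equal discriminations the signed area reduces to the difference in difficulties. A numerical sketch with made-up parameters:

```python
# A numerical sketch of Raju's area idea (made-up 2PL parameters): integrate
# the ICF gap over theta. Raju (1988) gives exact closed-form areas; with
# equal discriminations the signed area equals the difficulty difference.
import numpy as np

def icc_2pl(theta, a, b):
    return 1 / (1 + np.exp(-1.7 * a * (theta - b)))

theta = np.linspace(-4, 4, 2001)
dtheta = theta[1] - theta[0]
p_ref = icc_2pl(theta, a=1.0, b=0.0)
p_foc = icc_2pl(theta, a=1.0, b=0.5)    # uniform DIF via shifted difficulty

signed = np.sum(p_ref - p_foc) * dtheta            # ~ b_foc - b_ref = 0.5
unsigned = np.sum(np.abs(p_ref - p_foc)) * dtheta  # magnitude regardless of sign
print(f"signed area = {signed:.3f}, unsigned area = {unsigned:.3f}")
```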
Peer reviewed
Raju, Nambury S.; Fortmann-Johnson, Kristen A.; Kim, Wonsuk; Morris, Scott B.; Nering, Michael L.; Oshima, T. C. – Applied Psychological Measurement, 2009
A recent study by Oshima, Raju, and Nanda (2006) proposes the item parameter replication (IPR) method for assessing the statistical significance of the noncompensatory differential item functioning (NCDIF) index within the differential functioning of items and tests (DFIT) framework. Previous Monte Carlo simulations have found that the appropriate cutoff…
Descriptors: Test Bias, Statistical Significance, Item Response Theory, Monte Carlo Methods
Peer reviewed
Flowers, Claudia P.; Oshima, T. C.; Raju, Nambury S. – Applied Psychological Measurement, 1999
Examined the polytomous differential functioning of items and tests (DFIT) framework proposed by N. Raju and others through simulation. Findings show that the DFIT framework is effective in identifying differential item functioning and differential test functioning. (SLD)
Descriptors: Identification, Item Bias, Models, Test Bias
Peer reviewed
McCarty, F. A.; Oshima, T. C.; Raju, Nambury S. – Applied Measurement in Education, 2007
Oshima, Raju, Flowers, and Slinde (1998) described procedures for identifying sources of differential functioning for dichotomous data using differential bundle functioning (DBF) derived from the differential functioning of items and test (DFIT) framework (Raju, van der Linden, & Fleer, 1995). The purpose of this study was to extend the…
Descriptors: Rating Scales, Test Bias, Scoring, Test Items
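Differential bundle functioning aggregates signed ICF gaps across a bundle of items so that DIF driven by a shared source accumulates rather than cancels. A rough sketch with invented parameters, using the dichotomous 2PL case rather than the authors' polytomous extension:

```python
# A rough sketch of the differential bundle functioning idea (invented
# parameters, dichotomous 2PL rather than the authors' polytomous case):
# sum signed ICF gaps over a bundle so shared-source DIF accumulates.
import numpy as np

def icc_2pl(theta, a, b):
    return 1 / (1 + np.exp(-1.7 * a * (theta - b)))

rng = np.random.default_rng(3)
theta_focal = rng.normal(size=3000)

bundle = [                      # (a, b_reference, b_focal)
    (1.0, 0.0, 0.3),            # items sharing a small uniform-DIF source
    (1.2, -0.5, -0.2),
    (0.8, 0.6, 0.6),            # clean item
]

# signed gap in the bundle's expected score for each focal examinee
gap = sum(icc_2pl(theta_focal, a, b_f) - icc_2pl(theta_focal, a, b_r)
          for a, b_r, b_f in bundle)
print(f"mean signed bundle gap: {gap.mean():.3f}")
```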
Peer reviewed
Oshima, T. C.; Raju, Nambury S.; Nanda, Alice O. – Journal of Educational Measurement, 2006
A new item parameter replication method is proposed for assessing the statistical significance of the noncompensatory differential item functioning (NCDIF) index associated with the differential functioning of items and tests framework. In this new method, a cutoff score for each item is determined by obtaining a (1 - alpha) percentile rank score…
Descriptors: Evaluation Methods, Statistical Distributions, Statistical Significance, Test Bias
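The IPR logic is straightforward to sketch: draw many pairs of no-DIF replicates of an item's parameter estimates from their sampling distribution, compute NCDIF for each pair, and take the (1 - alpha) percentile as the cutoff. The covariance matrix below is a made-up stand-in for an estimated one.

```python
# A sketch of the item parameter replication (IPR) idea: simulate pairs of
# no-DIF parameter estimates from their sampling distribution, compute
# NCDIF for each pair, and take the (1 - alpha) percentile as the cutoff.
# The covariance matrix is a made-up stand-in for an estimated one.
import numpy as np

def icc_2pl(theta, a, b):
    return 1 / (1 + np.exp(-1.7 * a * (theta - b)))

rng = np.random.default_rng(2)
theta_focal = rng.normal(size=2000)
a_hat, b_hat = 1.1, 0.2                       # item parameter estimates
cov = np.array([[0.02, 0.0], [0.0, 0.03]])    # hypothetical sampling cov

ncdif_null = []
for _ in range(1000):                          # parameter replications
    # two independent draws mimic reference/focal estimates under no DIF
    a1, b1 = rng.multivariate_normal([a_hat, b_hat], cov)
    a2, b2 = rng.multivariate_normal([a_hat, b_hat], cov)
    gap = icc_2pl(theta_focal, a1, b1) - icc_2pl(theta_focal, a2, b2)
    ncdif_null.append(np.mean(gap ** 2))

cutoff = np.percentile(ncdif_null, 99)         # (1 - alpha), alpha = .01
print(f"NCDIF cutoff at alpha = .01: {cutoff:.4f}")
```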
Peer reviewed
Oshima, T. C.; Raju, Nambury S.; Flowers, Claudia P. – Journal of Educational Measurement, 1997
Defines and demonstrates a framework for studying differential item functioning and differential test functioning for tests that are intended to be multidimensional. The procedure, which is illustrated with simulated data, is an extension of the unidimensional differential functioning of items and tests approach (N. Raju, W. van der Linden, and P.…
Descriptors: Item Bias, Item Response Theory, Models, Simulation