Showing all 11 results
Peer reviewed
Raju, Nambury S.; Fortmann-Johnson, Kristen A.; Kim, Wonsuk; Morris, Scott B.; Nering, Michael L.; Oshima, T. C. – Applied Psychological Measurement, 2009
A recent study by Oshima, Raju, and Nanda (2006) proposes the item parameter replication (IPR) method for assessing the statistical significance of the noncompensatory differential item functioning (NCDIF) index within the differential functioning of items and tests (DFIT) framework. Previous Monte Carlo simulations have found that the appropriate cutoff…
Descriptors: Test Bias, Statistical Significance, Item Response Theory, Monte Carlo Methods
Peer reviewed
Flowers, Claudia P.; Oshima, T. C.; Raju, Nambury S. – Applied Psychological Measurement, 1999
Examined, through simulation, the polytomous differential functioning of items and tests (DFIT) framework proposed by N. Raju and others. Findings show that the DFIT framework is effective in identifying both differential item functioning and differential test functioning. (SLD)
Descriptors: Identification, Item Bias, Models, Test Bias
Peer reviewed
Raju, Nambury S.; And Others – Applied Psychological Measurement, 1995
Internal measures of differential functioning of items and tests (DFIT) based on item response theory (IRT) are proposed. The new differential test functioning index leads to noncompensatory differential item functioning (DIF) indices. Monte Carlo studies demonstrate that these indices are accurate in assessing DIF. (SLD)
Descriptors: Item Response Theory, Monte Carlo Methods, Test Bias, Test Items
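For readers new to the DFIT framework, the indices proposed in this line of work are usually presented as follows. This is a reconstruction from standard DFIT sources, not a quotation from the 1995 article: writing d_i(θ) for the focal-minus-reference difference in item response functions and D(θ) for its sum over items,

    d_i(\theta) = P_{iF}(\theta) - P_{iR}(\theta), \qquad D(\theta) = \sum_i d_i(\theta)
    \mathrm{NCDIF}_i = E_F\!\left[d_i(\theta)^2\right] = \sigma_{d_i}^2 + \mu_{d_i}^2
    \mathrm{CDIF}_i = \operatorname{Cov}(d_i, D) + \mu_{d_i}\,\mu_D
    \mathrm{DTF} = E_F\!\left[D(\theta)^2\right] = \sum_i \mathrm{CDIF}_i

with all moments taken over the focal-group ability distribution.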
Peer reviewed
McCarty, F. A.; Oshima, T. C.; Raju, Nambury S. – Applied Measurement in Education, 2007
Oshima, Raju, Flowers, and Slinde (1998) described procedures for identifying sources of differential functioning for dichotomous data using differential bundle functioning (DBF) derived from the differential functioning of items and tests (DFIT) framework (Raju, van der Linden, & Fleer, 1995). The purpose of this study was to extend the…
Descriptors: Rating Scales, Test Bias, Scoring, Test Items
Peer reviewed
Oshima, T. C.; Raju, Nambury S.; Nanda, Alice O. – Journal of Educational Measurement, 2006
A new item parameter replication method is proposed for assessing the statistical significance of the noncompensatory differential item functioning (NCDIF) index associated with the differential functioning of items and tests framework. In this new method, a cutoff score for each item is determined by obtaining a (1 - alpha) percentile rank score…
Descriptors: Evaluation Methods, Statistical Distributions, Statistical Significance, Test Bias
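As a rough illustration of the cutoff logic in the entry above, the sketch below simulates NCDIF values for item parameter replications generated under a no-DIF hypothesis and takes the (1 - alpha) percentile as the cutoff. The 2PL model, the normal perturbation of parameters, and all parameter values and standard errors are illustrative assumptions, not the estimation procedure used in the article.

    import numpy as np

    def irf_2pl(theta, a, b):
        """Two-parameter logistic item response function."""
        return 1.0 / (1.0 + np.exp(-1.7 * a * (theta - b)))

    def ncdif(theta, a_f, b_f, a_r, b_r):
        """NCDIF: mean squared focal-minus-reference IRF difference,
        averaged over a sample of focal-group abilities."""
        d = irf_2pl(theta, a_f, b_f) - irf_2pl(theta, a_r, b_r)
        return np.mean(d ** 2)

    rng = np.random.default_rng(0)
    theta = rng.normal(0.0, 1.0, 2000)   # assumed focal-group abilities
    a_r, b_r = 1.2, 0.3                  # hypothetical reference-group parameters
    se_a, se_b = 0.05, 0.08              # hypothetical standard errors

    # Under no DIF, focal parameters equal reference parameters up to
    # estimation error; replicating that error builds a null NCDIF distribution.
    reps = 1000
    null_ncdif = np.array([
        ncdif(theta,
              a_r + rng.normal(0.0, se_a),
              b_r + rng.normal(0.0, se_b),
              a_r, b_r)
        for _ in range(reps)
    ])

    alpha = 0.01
    cutoff = np.percentile(null_ncdif, 100 * (1 - alpha))
    print(f"NCDIF cutoff at the {100 * (1 - alpha):.0f}th percentile: {cutoff:.5f}")

An observed NCDIF value above this cutoff would then be flagged as statistically significant at level alpha.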
Peer reviewed
Raju, Nambury S. – Psychometrika, 1988
Formulas for computing the exact signed and unsigned areas between two item characteristic curves (ICCs) are presented. It is further shown that when the "c" parameters are unequal, the area between two ICCs is infinite. The significance of the exact area measures for item bias research is discussed. (Author)
Descriptors: Equations (Mathematics), Estimation (Mathematics), Item Analysis, Latent Trait Theory
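The exact-area results for the three-parameter model with equal lower asymptotes (c_1 = c_2 = c) are commonly quoted in the DIF literature as follows; this is a reconstruction of the familiar formulas, not text from the article. With scaling constant D and a_1 ≠ a_2,

    \mathrm{Signed\ Area} = (1 - c)(b_2 - b_1)
    \mathrm{Unsigned\ Area} = (1 - c)\left|
        \frac{2(a_2 - a_1)}{D\,a_1 a_2}
        \ln\!\left(1 + e^{D a_1 a_2 (b_2 - b_1)/(a_2 - a_1)}\right)
        - (b_2 - b_1) \right|

with the unsigned area reducing to (1 - c)|b_2 - b_1| when a_1 = a_2. When c_1 ≠ c_2, the two curves differ by a nonzero constant in the lower tail, so the integrated difference diverges, which is the infinite-area result the abstract notes.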
Peer reviewed
Devine, Patrick J.; Raju, Nambury S. – Educational and Psychological Measurement, 1982
Four methods of item bias detection--transformed item difficulty, item discrimination expressed as Clemans' lambda, chi-square, and the three-parameter item characteristic curve--were studied to determine the degree of correspondence among them in identifying biased and unbiased items in reading and mathematics subtests of the 1978 SRA Achievement…
Descriptors: Correlation, Difficulty Level, Item Analysis, Latent Trait Theory
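As context for the first method above, the transformed item difficulty approach works on the ETS delta scale, usually given as

    \Delta = 13 - 4\,\Phi^{-1}(p)

where p is the proportion correct, so an item of average difficulty (p = .5) has Δ = 13 and harder items get larger deltas; items are flagged when their point falls far from the principal axis of the two groups' delta plot. This is the standard textbook presentation, reconstructed here rather than quoted from the study.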
Ellis, Barbara B.; Raju, Nambury S. – 2003
This chapter briefly describes some of the methods that test developers and psychometricians have devised to identify item and test bias and some of the challenges they still face. Although it may not be reasonable for classroom teachers to use these methods on a day-to-day basis in constructing tests, the authors argue that it is important for…
Descriptors: Academic Achievement, Educational Assessment, Educational Testing, Evaluation Methods
Peer reviewed
Raju, Nambury S.; Normand, Jacques – Educational and Psychological Measurement, 1985
The Regression Model, popular in selection bias research, is proposed for use in item bias detection, providing a common framework for both types of bias. An empirical test of this new method, now called the Regression Bias method, and a comparison with other commonly used item bias detection methods are presented. (Author/BS)
Descriptors: Achievement Tests, Intermediate Grades, Item Analysis, Junior High Schools
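The abstract does not reproduce the Regression Bias computations, but the general regression approach to bias detection can be sketched: fit a separate regression in each group and compare slopes and intercepts. The sketch below is a generic Cleary-style comparison on made-up data, not the authors' exact procedure; the variables and parameter values are hypothetical.

    import numpy as np

    def fit_line(x, y):
        """Ordinary least-squares slope and intercept for one group."""
        slope, intercept = np.polyfit(x, y, 1)
        return slope, intercept

    rng = np.random.default_rng(1)

    # Hypothetical data: item (or criterion) score regressed on total
    # (or predictor) score, separately by group.
    x_ref = rng.normal(50, 10, 400)
    y_ref = 0.8 * x_ref + 5 + rng.normal(0, 4, 400)
    x_foc = rng.normal(48, 10, 400)
    y_foc = 0.8 * x_foc + 1 + rng.normal(0, 4, 400)  # lower intercept: potential bias

    slope_r, int_r = fit_line(x_ref, y_ref)
    slope_f, int_f = fit_line(x_foc, y_foc)

    # Large group differences in slope or intercept flag the item (or
    # predictor) for closer inspection; a formal test would compare these
    # estimates against their standard errors.
    print(f"reference: slope={slope_r:.3f}, intercept={int_r:.3f}")
    print(f"focal:     slope={slope_f:.3f}, intercept={int_f:.3f}")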
Peer reviewed
Raju, Nambury S.; And Others – Applied Psychological Measurement, 1991
A two-parameter logistic regression model for personnel selection is proposed. The model was tested with a database of 84,808 military enlistees. It relates the probability of job success directly to trait level, addressing such topics as selection, validity generalization, employee classification, selection bias, and utility-based fair…
Descriptors: Classification, Equations (Mathematics), Job Performance, Mathematical Models
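The abstract does not give the model's parameterization, but a two-parameter logistic function relating success probability to trait level typically takes the form sketched below; the parameter values are illustrative only.

    import math

    def p_success(theta, a=1.0, b=0.0):
        """Two-parameter logistic model: probability of job success as a
        monotone function of trait level theta; a is the slope
        (discrimination), b the trait level where P(success) = 0.5."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    # Hypothetical trait levels for three applicants
    for theta in (-1.0, 0.0, 1.5):
        print(f"theta={theta:+.1f}  P(success)={p_success(theta, a=1.2, b=0.2):.3f}")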
Peer reviewed
Raju, Nambury S.; And Others – Applied Measurement in Education, 1989
The effects of number of score groups and inclusion/exclusion of the studied item were examined in an empirical evaluation of the Mantel-Haenszel technique (MHT), using 3,795 elementary school students who took the SRA vocabulary test. Inclusion of four or more score groups yielded stable alpha estimates with the MHT. (SLD)
Descriptors: Black Students, Elementary Education, Elementary School Students, Hispanic Americans
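For reference, the Mantel-Haenszel technique in the entry above aggregates one 2x2 table (right/wrong by reference/focal group) per score group into a common odds ratio; the alpha estimates mentioned are estimates of this ratio. The sketch below computes the MH odds ratio and its log from made-up tables; the counts and group structure are illustrative only.

    import math

    # One 2x2 table per score group: (reference right, reference wrong,
    #                                 focal right, focal wrong) -- made-up counts.
    tables = [
        (40, 10, 30, 20),
        (55, 15, 45, 25),
        (70, 10, 60, 20),
        (80,  5, 75, 10),
    ]

    num = den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n   # reference-right x focal-wrong
        den += b * c / n   # reference-wrong x focal-right

    alpha_mh = num / den
    print(f"MH common odds ratio: {alpha_mh:.3f}")
    print(f"log odds ratio:       {math.log(alpha_mh):.3f}")

The familiar ETS MH D-DIF statistic is a rescaling of this log odds ratio (multiplied by -2.35), with values near zero indicating negligible DIF.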