Showing all 8 results
Peer reviewed
Roschmann, Sarina; Witmer, Sara E.; Volker, Martin A. – International Journal of Testing, 2021
Accommodations are commonly provided to address language-related barriers students may experience during testing. Research on the validity of scores from accommodated test administrations remains somewhat inconclusive. The current study investigated item response patterns to understand whether accommodations, as used in practice among English…
Descriptors: Testing Accommodations, English Language Learners, Scores, Item Response Theory
Peer reviewed
Furlow, Carolyn F.; Ross, Terris Raiford; Gagne, Phill – Applied Psychological Measurement, 2009
Douglas, Roussos, and Stout introduced the concept of differential bundle functioning (DBF) for identifying the underlying causes of differential item functioning (DIF). In this study, the reference group was simulated to have a higher mean ability than the focal group on a nuisance dimension, resulting in DIF for each of the multidimensional items…
Descriptors: Test Bias, Test Items, Reference Groups, Simulation
Koon, Sharon – ProQuest LLC, 2010
This study examined the effectiveness of the odds-ratio method (Penfield, 2008) and the multinomial logistic regression method (Kato, Moen, & Thurlow, 2009) for measuring differential distractor functioning (DDF) effects in comparison to the standardized distractor analysis approach (Schmitt & Bleistein, 1987). Students classified as participating…
Descriptors: Test Bias, Test Items, Reference Groups, Lunch Programs
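The odds-ratio approach to differential distractor functioning (DDF) compared above asks whether examinees in the focal group, relative to the reference group, are disproportionately drawn to particular wrong answers. As a minimal sketch of that core idea only: the counts below are hypothetical, and unlike Penfield's (2008) estimator or the multinomial logistic regression approach, this version does not match examinees on ability.

```python
import math

# Hypothetical response counts for one multiple-choice item; the keyed
# (correct) option is "A", so "B", "C", "D" are distractors.
reference = {"A": 300, "B": 40, "C": 30, "D": 30}
focal     = {"A": 220, "B": 90, "C": 25, "D": 25}

def distractor_log_odds_ratio(ref, foc, distractor, key="A"):
    """Log odds ratio of choosing a distractor over the keyed response,
    focal group relative to reference group; positive values mean the
    distractor pulls the focal group more strongly."""
    return math.log((foc[distractor] / foc[key]) /
                    (ref[distractor] / ref[key]))

ddf = {d: distractor_log_odds_ratio(reference, focal, d)
       for d in ("B", "C", "D")}
```

In this toy data set, distractor "B" shows a much larger log odds ratio than "C" or "D", which is the kind of asymmetry DDF methods are designed to flag.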
Peer reviewed
Puhan, Gautam; Moses, Timothy P.; Yu, Lei; Dorans, Neil J. – Journal of Educational Measurement, 2009
This study examined the extent to which log-linear smoothing could improve the accuracy of differential item functioning (DIF) estimates in small samples of examinees. Examinee responses from a certification test were analyzed using White examinees in the reference group and African American examinees in the focal group. Using a simulation…
Descriptors: Test Items, Reference Groups, Testing Programs, Raw Scores
Peer reviewed
Klockars, Alan J.; Lee, Yoonsun – Journal of Educational Measurement, 2008
Monte Carlo simulations with 20,000 replications are reported to estimate the probability of rejecting the null hypothesis regarding DIF using SIBTEST when there is DIF present and/or when impact is present due to differences on the primary dimension to be measured. Sample sizes are varied from 250 to 2000 and test lengths from 10 to 40 items.…
Descriptors: Test Bias, Test Length, Reference Groups, Probability
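SIBTEST, the procedure studied above, compares reference- and focal-group performance on a studied item bundle after matching examinees on a valid subtest score. The sketch below shows only the basic weighted-difference form of the beta statistic with hypothetical summary data; the actual SIBTEST procedure adds a regression correction for measurement error in the matching score, omitted here.

```python
# Each matching stratum k maps to summary statistics on the studied subtest:
# (n_ref, mean_ref, n_focal, mean_focal). All values are hypothetical.
strata = {
    0: (20, 0.30, 25, 0.25),
    1: (40, 0.55, 35, 0.45),
    2: (30, 0.80, 20, 0.70),
}

def sibtest_beta(strata):
    """Focal-group-weighted mean difference (reference minus focal) in
    studied-subtest performance across matched strata; positive values
    suggest DIF/DBF against the focal group."""
    total_focal = sum(nf for _, _, nf, _ in strata.values())
    return sum((nf / total_focal) * (mr - mf)
               for _, mr, nf, mf in strata.values())

beta = sibtest_beta(strata)
```

With these numbers the reference group outperforms the matched focal group in every stratum, so beta comes out positive.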
Peer reviewed
Clauser, Brian; And Others – Journal of Educational Measurement, 1994
The effect of reducing the number of score groups in the matching criterion of the Mantel-Haenszel procedure when screening for differential item functioning was investigated with a simulated data set. Results suggest that more than modest reductions cannot be recommended when ability distributions of reference and focal groups differ. (SLD)
Descriptors: Ability, Experimental Groups, Item Bias, Reference Groups
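The Mantel-Haenszel procedure referenced above pools 2x2 tables (group by correct/incorrect) across matched total-score groups into a common odds ratio; collapsing score groups, as the study investigates, changes how this matching behaves. A minimal sketch with hypothetical counts:

```python
import math

# One stratum per matched score group for a single studied item:
# (ref_correct, ref_incorrect, focal_correct, focal_incorrect).
# All counts below are hypothetical.
strata = [
    (40, 10, 30, 20),
    (60, 15, 45, 25),
    (80, 10, 60, 20),
    (50,  5, 40, 10),
]

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across matched score groups."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

alpha = mh_odds_ratio(strata)
# ETS delta scale: MH D-DIF = -2.35 * ln(alpha); values near 0 suggest
# negligible DIF, negative values favor the reference group.
mh_d_dif = -2.35 * math.log(alpha)
```

Reducing the number of score groups amounts to merging rows of `strata` before pooling, which coarsens the ability matching that the common odds ratio depends on.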
Peer reviewed
Smith, Richard M. – Educational and Psychological Measurement, 1994
Simulated data are used to assess the appropriateness of using separate calibration and between-fit approaches to detecting item bias in the Rasch rating scale model. Results indicate that Type I error rates for the null distribution hold even when there are different ability levels for reference and focal groups. (SLD)
Descriptors: Ability, Goodness of Fit, Identification, Item Bias
Ito, Kyoko; Sykes, Robert C. – 1994
Responses to previously calibrated items administered in a computerized adaptive testing (CAT) mode may be used to recalibrate the items. This live-data simulation study investigated the possibility, and limitations, of on-line adaptive recalibration of precalibrated items. Responses to items of a Rasch-based paper-and-pencil licensure examination…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
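On-line recalibration of the kind studied above re-estimates an item's Rasch difficulty from responses gathered during CAT administration, with person abilities already available. The sketch below is a generic one-item Newton-Raphson estimate under that assumption, with made-up data; it is not the study's actual recalibration procedure.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def rasch_difficulty(thetas, responses, iters=25):
    """Newton-Raphson MLE of one item's Rasch difficulty, treating the
    person abilities (thetas) as known from a prior calibration."""
    b = 0.0
    for _ in range(iters):
        p = [sigmoid(t - b) for t in thetas]
        grad = sum(p) - sum(responses)        # d(logL)/db
        info = sum(q * (1.0 - q) for q in p)  # -d2(logL)/db2
        b += grad / info                      # Newton step toward logL' = 0
    return b

# Hypothetical mini data set: five examinees with known abilities and
# their scored (0/1) responses to the item being recalibrated.
thetas = [-2.0, -1.0, 0.0, 1.0, 2.0]
responses = [0, 0, 1, 1, 1]
b_hat = rasch_difficulty(thetas, responses)
```

Because three of five examinees answered correctly, the estimated difficulty lands somewhat below the mean ability of zero.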