Showing all 9 results
Peer reviewed
Sarallah Jafaripour; Omid Tabatabaei; Hadi Salehi; Hossein Vahid Dastjerdi – International Journal of Language Testing, 2024
The purpose of this study was to examine gender and discipline-based Differential Item Functioning (DIF) and Differential Distractor Functioning (DDF) on the Islamic Azad University English Proficiency Test (IAUEPT). The study evaluated DIF and DDF across genders and disciplines using the Rasch model. To conduct DIF and DDF analysis, the examinees…
Descriptors: Item Response Theory, Test Items, Language Tests, Language Proficiency
Peer reviewed
Shahmirzadi, Niloufar – International Journal of Language Testing, 2023
The documentation of test takers' achievements has been accomplished through large-scale assessments to find general information about students' language ability. To remove subjectivity, Cognitive Diagnostic Assessment (CDA) has recently played a crucial role in perceiving candidates' latent attribute patterns to find multi-diagnostic information…
Descriptors: Placement Tests, Test Validity, Programming Languages, Diagnostic Tests
Peer reviewed
Baghaei, Purya; Ravand, Hamdollah – SAGE Open, 2019
In many reading comprehension tests, different test formats are employed. Two commonly used formats for measuring reading comprehension are sustained passages followed by questions and cloze items. Individual differences in handling test format peculiarities could constitute a source of score variance. In this study, a bifactor Rasch model…
Descriptors: Cloze Procedure, Test Bias, Individual Differences, Difficulty Level
Peer reviewed
Banerjee, Jayanti; Papageorgiou, Spiros – International Journal of Listening, 2016
The research reported in this article investigates differential item functioning (DIF) in a listening comprehension test. The study explores the relationship between test-taker age and the items' language domains across multiple test forms. The data comprise test-taker responses (N = 2,861) to a total of 133 unique items, 46 items of which were…
Descriptors: Correlation, High Stakes Tests, Test Items, Listening Comprehension Tests
Peer reviewed
Turkan, Sultan; Liu, Ou Lydia – International Journal of Science Education, 2012
The performance of English language learners (ELLs) has been a concern given the rapidly changing demographics in US K-12 education. This study aimed to examine whether students' English language status has an impact on their inquiry science performance. Differential item functioning (DIF) analysis was conducted with regard to ELL status on an…
Descriptors: Science Tests, English (Second Language), Second Language Learning, Test Bias
Peer reviewed
Aryadoust, Vahid – International Journal of Listening, 2012
This article investigates a version of the International English Language Testing System (IELTS) listening test for evidence of differential item functioning (DIF) based on gender, nationality, age, and degree of previous exposure to the test. Overall, the listening construct was found to be underrepresented, which is probably an important cause…
Descriptors: Evidence, Test Bias, Testing, Listening Comprehension Tests
Peer reviewed
Abbott, Marilyn L. – Language Testing, 2007
In this article, I describe a practical application of the Roussos and Stout (1996) multidimensional analysis framework for interpreting group performance differences on an ESL reading proficiency test. Although a variety of statistical methods have been developed for flagging test items that function differentially for equal ability examinees…
Descriptors: Test Bias, Test Items, English (Second Language), Second Language Learning
Peer reviewed
Ross, Steven J.; Okabe, Junko – International Journal of Testing, 2006
Test validity is predicated on there being a lack of bias in tasks, items, or test content. It is well-known that factors such as test candidates' mother tongue, life experiences, and socialization practices of the wider community may serve to inject subtle interactions between individuals' background and the test content. When the gender of the…
Descriptors: Gender Bias, Language Tests, Test Validity, Reading Comprehension
Brutten, Sheila R.; And Others – 1987
In a study of the detection of test item bias, Chinese speakers and Spanish speakers were administered a measure of pronunciation accuracy for the (d) and (z) morphemes. Six indices were calculated for each item: item difficulty, point biserial correlation coefficient, item variance, chi-square, Rasch logit of difficulty, and the average…
Descriptors: Articulation (Speech), Chinese, Comparative Analysis, Correlation