Showing 1 to 15 of 21 results
Peer reviewed
Direct link
Mohammad Ahmadi Safa; Bahare Nasiri – Language Testing in Asia, 2025
Studies have confirmed that fair assessment practices in educational contexts affect learners' motivation, self-regulation, and, above all, teacher credibility, yet the concept has been subject to educational stakeholders' diverse outlooks and perspectives. On this basis, this study delves into the high school English as a Foreign Language (EFL)…
Descriptors: Student Evaluation, High School Teachers, Evaluation Methods, English (Second Language)
Peer reviewed
Direct link
Bormanaki, Hamidreza Babaee; Ajideh, Parviz – Language Testing in Asia, 2022
This paper reports on an investigation of differential item functioning (DIF) in the Iranian Undergraduate University Entrance Special English Exam (IUUESEE) across four native language groups including the Azeri, the Persian, the Kurdish, and the Luri test takers via Rasch analysis. A total sample of 14,172 participants was selected for the…
Descriptors: Foreign Countries, Test Bias, Undergraduate Students, Native Language
Peer reviewed
Direct link
Mehrazmay, Roghayeh; Ghonsooly, Behzad; de la Torre, Jimmy – Applied Measurement in Education, 2021
The present study aims to examine gender differential item functioning (DIF) in the reading comprehension section of a high stakes test using cognitive diagnosis models. Based on the multiple-group generalized deterministic, noisy "and" gate (MG G-DINA) model, the Wald test and likelihood ratio test are used to detect DIF. The flagged…
Descriptors: Test Bias, College Entrance Examinations, Gender Differences, Reading Tests
Peer reviewed
Full text PDF on ERIC
Amirian, Seyed Mohammad Reza – International Journal of Language Testing, 2020
The purpose of the present study was twofold: (a) it examined the fairness of the Special English Test (SET) of the Iranian National University Entrance Exam (INUEE) by analyzing Differential Item Functioning (DIF) in the reading comprehension section of the test, and (b) it explored test takers' attitudes towards possible sources of unfairness and…
Descriptors: Reading Comprehension, Test Bias, English for Special Purposes, Language Tests
Peer reviewed
Full text PDF on ERIC
Sarallah Jafaripour; Omid Tabatabaei; Hadi Salehi; Hossein Vahid Dastjerdi – International Journal of Language Testing, 2024
The purpose of this study was to examine gender and discipline-based Differential Item Functioning (DIF) and Differential Distractor Functioning (DDF) on the Islamic Azad University English Proficiency Test (IAUEPT). The study evaluated DIF and DDF across genders and disciplines using the Rasch model. To conduct DIF and DDF analysis, the examinees…
Descriptors: Item Response Theory, Test Items, Language Tests, Language Proficiency
Peer reviewed
Direct link
Moghadam, M.; Nasirzadeh, F. – Language Testing in Asia, 2020
The present study investigates the fairness of an English reading comprehension test using Kunnan's (2004) test fairness framework (TFF), the most comprehensive model available for test fairness. The participants comprised 300 freshman students taking a general English course, chosen through availability sampling,…
Descriptors: Test Bias, Reading Tests, Reading Comprehension, Test Items
Peer reviewed
Full text PDF on ERIC
Shahmirzadi, Niloufar – International Journal of Language Testing, 2023
Test takers' achievements have been documented through large-scale assessments that provide general information about students' language ability. To reduce subjectivity, Cognitive Diagnostic Assessment (CDA) has recently played a crucial role in identifying candidates' latent attribute patterns and yielding multi-diagnostic information…
Descriptors: Placement Tests, Test Validity, Programming Languages, Diagnostic Tests
Peer reviewed
Direct link
Ahmadi Shirazi, Masoumeh – SAGE Open, 2019
Threats to construct validity should be reduced to a minimum. To that end, sources of bias, namely raters, items, and tests, as well as gender, age, race, language background, culture, and socio-economic status, need to be identified and removed. This study investigates raters' experience, language background, and the choice of essay prompt as potential…
Descriptors: Foreign Countries, Language Tests, Test Bias, Essay Tests
Peer reviewed
Direct link
Pishghadam, Reza; Baghaei, Purya; Seyednozadi, Zahra – International Journal of Testing, 2017
This article attempts to present emotioncy as a potential source of test bias to inform the analysis of test item performance. Emotioncy is defined as a hierarchy, ranging from "exvolvement" (auditory, visual, and kinesthetic) to "involvement" (inner and arch), to emphasize the emotions evoked by the senses. This study…
Descriptors: Test Bias, Item Response Theory, Test Items, Psychological Patterns
Peer reviewed
Full text PDF on ERIC
Alavi, Seyed Mohammad; Bordbar, Soodeh – Malaysian Online Journal of Educational Sciences, 2017
Differential Item Functioning (DIF) analysis is a key element in evaluating educational test fairness and validity. Gender is one of the most frequently cited sources of construct-irrelevant variance; it plays an important role in the university entrance exam and can therefore introduce bias and undermine test validity. The present study aims…
Descriptors: Test Bias, High Stakes Tests, Gender Differences, Item Response Theory
Peer reviewed
Direct link
Lee, HyeSun – Applied Measurement in Education, 2018
The current simulation study examined the effects of Item Parameter Drift (IPD) occurring in a short scale on parameter estimates in multilevel models where scores from a scale were employed as a time-varying predictor to account for outcome scores. Five factors, including three decisions about IPD, were considered for simulation conditions. It…
Descriptors: Test Items, Hierarchical Linear Modeling, Predictor Variables, Scores
Peer reviewed
Full text PDF on ERIC
Ahmadi, Alireza; Bazvand, Ali Darabi – Iranian Journal of Language Teaching Research, 2016
Differential Item Functioning (DIF) exists when examinees of equal ability from different groups have different probabilities of successful performance in a certain item. This study examined gender differential item functioning across the PhD Entrance Exam of TEFL (PEET) in Iran, using both logistic regression (LR) and one-parameter item response…
Descriptors: Test Bias, Gender Bias, College Entrance Examinations, English (Second Language)
Peer reviewed
Full text PDF on ERIC
Sheybani, Elias; Zeraatpishe, Mitra – International Journal of Language Testing, 2018
Test method is deemed to affect test scores along with examinee ability (Bachman, 1996). This research studies the role of the method facet in reading comprehension tests. Bachman divided the method facet into five categories, one of which concerns the nature of the input and the nature of the expected response. This study examined the role of the method effect in…
Descriptors: Reading Comprehension, Reading Tests, Test Items, Test Format
Peer reviewed
Direct link
Baghaei, Purya; Ravand, Hamdollah – SAGE Open, 2019
In many reading comprehension tests, different test formats are employed. Two commonly used test formats to measure reading comprehension are sustained passages followed by some questions and cloze items. Individual differences in handling test format peculiarities could constitute a source of score variance. In this study, a bifactor Rasch model…
Descriptors: Cloze Procedure, Test Bias, Individual Differences, Difficulty Level
Peer reviewed
Full text PDF on ERIC
Ravand, Hamdollah – Practical Assessment, Research & Evaluation, 2015
Multilevel models (MLMs) are flexible in that they can be employed to obtain item and person parameters, test for differential item functioning (DIF) and capture both local item and person dependence. Papers on the MLM analysis of item response data have focused mostly on theoretical issues where applications have been add-ons to simulation…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Educational Testing, Reading Comprehension