Search results: 6 journal articles (2007–2021) from the Journal of Educational Measurement, indexed under descriptors such as Difficulty Level, Test Items, and Test Bias.
Bolt, Daniel M.; Liao, Xiangyi – Journal of Educational Measurement, 2021
We revisit the empirically observed positive correlation between DIF and difficulty studied by Freedle and commonly seen in tests of verbal proficiency when comparing populations of different mean latent proficiency levels. It is shown that a positive correlation between DIF and difficulty estimates is actually an expected result (absent any true…
Descriptors: Test Bias, Difficulty Level, Correlation, Verbal Tests
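A minimal numerical sketch (not the authors' derivation; all parameter values are illustrative) shows how such a pattern can arise: when responses include a guessing floor but difficulties are estimated with a guessing-free Rasch model in each group, the estimated DIF favoring the lower-mean focal group grows with item difficulty even though no item has any true group effect.

```python
import numpy as np

def group_pvalues(mu, b, c=0.2, n_nodes=61):
    """Expected proportion correct per item for theta ~ N(mu, 1), under a
    3PL-like model with common guessing c and unit discrimination."""
    theta = np.linspace(mu - 4, mu + 4, n_nodes)
    w = np.exp(-0.5 * (theta - mu) ** 2)
    w /= w.sum()
    p = c + (1 - c) / (1 + np.exp(-(theta[:, None] - b[None, :])))
    return w @ p

b_true = np.linspace(-2, 3, 25)              # illustrative item difficulties
p_ref = group_pvalues(mu=0.5, b=b_true)      # higher-mean reference group
p_foc = group_pvalues(mu=-0.5, b=b_true)     # lower-mean focal group

def rasch_difficulty(p):
    """Rasch-style difficulty estimate that ignores guessing, centered
    within group (a crude stand-in for equating)."""
    b_hat = -np.log(p / (1 - p))
    return b_hat - b_hat.mean()

# Positive DIF here = item relatively harder for the reference group,
# i.e., DIF favoring the focal group.
dif = rasch_difficulty(p_ref) - rasch_difficulty(p_foc)
r = np.corrcoef(dif, b_true)[0, 1]
print(f"correlation between DIF and difficulty: {r:.2f}")
```

The guessing floor compresses the group difference on hard items, so after centering, the DIF estimates favoring the focal group rise with difficulty, a Freedle-like pattern with no true DIF anywhere.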
DeCarlo, Lawrence T. – Journal of Educational Measurement, 2021
In a signal detection theory (SDT) approach to multiple choice exams, examinees are viewed as choosing, for each item, the alternative that is perceived as being the most plausible, with perceived plausibility depending in part on whether or not an item is known. The SDT model is a process model and provides measures of item difficulty, item…
Descriptors: Perception, Bias, Theories, Test Items
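The choice process described here can be caricatured in a few lines. This is a toy simulation under assumed parameters, not DeCarlo's fitted SDT model: each alternative receives a normally distributed plausibility signal, the correct alternative's signal is shifted by a discriminability parameter d, and the examinee picks the maximum.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_correct(d, n_alt=4, n_sim=200_000):
    """Toy SDT choice rule: each of n_alt alternatives gets a N(0, 1)
    plausibility signal, the correct one is shifted by d; choose the max."""
    correct = rng.normal(d, 1.0, n_sim)
    distractors = rng.normal(0.0, 1.0, (n_sim, n_alt - 1))
    return float(np.mean(correct > distractors.max(axis=1)))

for d in (0.0, 1.0, 2.0):
    print(f"d = {d}: P(correct) = {prob_correct(d):.2f}")
```

At d = 0 the item is effectively unknown and accuracy sits at the 1/4 chance level; as d grows, accuracy rises, so d plays the role of an inverse item-difficulty measure.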
Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David – Journal of Educational Measurement, 2010
In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…
Descriptors: Test Bias, Models, Test Items, Difficulty Level
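A generative sketch of the data structure the RIM assumes: Rasch responses with random person abilities and random item difficulties, plus a mixture over items in which a small DIF class carries an extra difficulty shift for the focal group. The mixture proportion, shift size, and sample sizes are illustrative, and no model fitting is attempted.

```python
import numpy as np

rng = np.random.default_rng(2)

n_persons, n_items = 2000, 40
theta = rng.normal(0.0, 1.0, n_persons)   # random person abilities
b = rng.normal(0.0, 1.0, n_items)         # random item difficulties
group = rng.integers(0, 2, n_persons)     # 0 = reference, 1 = focal

# Mixture over items: a small DIF class gets an extra difficulty shift
# for the focal group; the rest form the no-DIF class.
is_dif = np.zeros(n_items, dtype=bool)
is_dif[:8] = True
shift = np.where(is_dif, 0.8, 0.0)

b_eff = b[None, :] + shift[None, :] * group[:, None]
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_eff)))
y = (rng.random((n_persons, n_items)) < p).astype(int)

# Group gap in proportion correct, by item class:
gap = y[group == 0].mean(axis=0) - y[group == 1].mean(axis=0)
print(f"mean gap, DIF items: {gap[is_dif].mean():.2f}; "
      f"non-DIF items: {gap[~is_dif].mean():.2f}")
```

Fitting the RIM would amount to estimating this mixture and recovering each item's class membership; the snippet only exhibits the data-generating structure.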
Muckle, Timothy J.; Karabatsos, George – Journal of Educational Measurement, 2009
It is known that the Rasch model is a special two-level hierarchical generalized linear model (HGLM). This article demonstrates that the many-faceted Rasch model (MFRM) is also a special case of the two-level HGLM, with a random intercept representing examinee ability on a test, and fixed effects for the test items, judges, and possibly other…
Descriptors: Test Items, Item Response Theory, Models, Regression (Statistics)
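The two-level structure the article describes can be written out as a generative model: a Bernoulli GLM with a logit link, a random examinee intercept, and fixed item and judge effects. The parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

n_persons, n_items, n_judges = 500, 10, 5
theta = rng.normal(0.0, 1.0, n_persons)   # level-2 random intercept: ability
b = np.linspace(-1.0, 1.0, n_items)       # fixed item-difficulty effects
c = np.linspace(-0.5, 0.5, n_judges)      # fixed judge-severity effects

# MFRM as a two-level HGLM: logit P(X_pij = 1) = theta_p - b_i - c_j
eta = theta[:, None, None] - b[None, :, None] - c[None, None, :]
p = 1.0 / (1.0 + np.exp(-eta))
x = (rng.random(p.shape) < p).astype(int)

# Harsher judges (larger c) award fewer positive ratings:
print([round(x[:, :, j].mean(), 2) for j in range(n_judges)])
```

A mixed-effects logistic regression with a random person intercept and item/judge dummy codes would recover b and c from such data, which is exactly the HGLM reading of the MFRM.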
Camilli, Gregory; Prowker, Adam; Dossey, John A.; Lindquist, Mary M.; Chiu, Ting-Wei; Vargas, Sadako; de la Torre, Jimmy – Journal of Educational Measurement, 2008
A new method for analyzing differential item functioning is proposed to investigate the relative strengths and weaknesses of multiple groups of examinees. Accordingly, the notion of a conditional measure of difference between two groups (Reference and Focal) is generalized to a conditional variance. The objective of this article is to present and…
Descriptors: Test Bias, National Competency Tests, Grade 4, Difficulty Level
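The core idea, a conditional two-group difference generalized to a conditional variance across several groups, can be sketched as a toy estimator. The matching variable, weighting, and simulated data below are illustrative, not the article's method.

```python
import numpy as np

def conditional_dif_variance(correct, group, total):
    """For one item: at each matching-score level, take the variance of the
    group-specific proportions correct, then average across levels."""
    variances, weights = [], []
    for s in np.unique(total):
        m = total == s
        props = [correct[m & (group == g)].mean()
                 for g in np.unique(group)
                 if np.any(m & (group == g))]
        if len(props) > 1:
            variances.append(np.var(props))
            weights.append(m.sum())
    return float(np.average(variances, weights=weights))

# Tiny demo with three groups and a 0-4 matching score (illustrative only):
rng = np.random.default_rng(4)
n = 60_000
total = rng.integers(0, 5, n)
group = rng.integers(0, 3, n)
p_no_dif = 0.3 + 0.1 * total              # depends on proficiency only
p_dif = p_no_dif + 0.1 * (group - 1)      # plus a group-specific shift
results = {}
for name, p in (("no DIF", p_no_dif), ("with DIF", p_dif)):
    correct = rng.random(n) < p
    results[name] = conditional_dif_variance(correct, group, total)
    print(name, round(results[name], 4))
```

A value near zero at every score level means the groups behave alike once proficiency is controlled; larger values flag multi-group DIF without singling out one focal group.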
Prowker, Adam; Camilli, Gregory – Journal of Educational Measurement, 2007
The central idea of differential item functioning (DIF) is to examine differences between two groups at the item level while controlling for overall proficiency. This approach is useful for examining hypotheses at a finer-grained level than is permitted by a total test score. The methodology proposed in this paper is also aimed at estimating…
Descriptors: Scores, Test Bias, Difficulty Level, Test Items
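Controlling for overall proficiency is classically done by stratifying on total score; the Mantel-Haenszel common odds ratio below is that standard two-group benchmark, shown for context rather than as the methodology this paper proposes.

```python
import numpy as np

def mantel_haenszel_or(correct, group, total):
    """Mantel-Haenszel common odds ratio for one item, stratifying on total
    score; values near 1 indicate no DIF between the two groups."""
    num = den = 0.0
    for s in np.unique(total):
        m = total == s
        a = np.sum(correct[m] & (group[m] == 0))    # reference, correct
        b = np.sum(~correct[m] & (group[m] == 0))   # reference, incorrect
        c = np.sum(correct[m] & (group[m] == 1))    # focal, correct
        d = np.sum(~correct[m] & (group[m] == 1))   # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den

# Illustrative check: with no true DIF, the MH odds ratio sits near 1.
rng = np.random.default_rng(5)
n = 40_000
total = rng.integers(0, 6, n)                  # stand-in matching score
group = rng.integers(0, 2, n)
correct = rng.random(n) < 0.2 + 0.1 * total    # same curve for both groups
print(round(mantel_haenszel_or(correct, group, total), 2))
```

Finer-grained approaches like the one this article proposes go beyond this single summary ratio, but the matching-on-total-score logic is the common starting point.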