Showing 1 to 15 of 36 results
Peer reviewed
Carmen Köhler; Lale Khorramdel; Artur Pokropek; Johannes Hartig – Journal of Educational Measurement, 2024
For assessment scales applied to different groups (e.g., students from different states; patients in different countries), multigroup differential item functioning (MG-DIF) needs to be evaluated in order to ensure that respondents with the same trait level but from different groups have equal response probabilities on a particular item. The…
Descriptors: Measures (Individuals), Test Bias, Models, Item Response Theory
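The invariance condition described in this abstract can be made concrete with a small sketch. The following Python snippet (illustrative only, not the authors' method; the 2PL parameter values are invented) shows how a group-dependent difficulty parameter violates the requirement that respondents with the same trait level have equal response probabilities:

```python
import numpy as np

def p_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = 0.5                  # same trait level for both respondents
a = 1.2                      # common discrimination
b_ref, b_focal = 0.0, 0.4    # focal group faces a harder item: uniform DIF

p_ref = p_2pl(theta, a, b_ref)
p_focal = p_2pl(theta, a, b_focal)

# DIF: equal trait level, unequal response probability across groups
print(f"reference: {p_ref:.3f}, focal: {p_focal:.3f}, gap: {p_ref - p_focal:.3f}")
```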
Peer reviewed
Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol – Educational Measurement: Issues and Practice, 2016
The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…
Descriptors: Test Bias, Research Methodology, Evaluation Methods, Models
Peer reviewed
Li, Xiaomin; Wang, Wen-Chung – Journal of Educational Measurement, 2015
The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable for cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are…
Descriptors: Test Bias, Models, Cognitive Measurement, Evaluation Methods
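For readers unfamiliar with why classical DIF tools do not transfer directly to CDMs: the latent variable is a discrete attribute profile rather than a continuous trait. A minimal sketch of the DINA model, one common CDM, illustrates this (parameter values and Q-vector are invented for illustration):

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """DINA model: P(correct) = 1 - slip if all attributes required by the
    item are mastered (eta = 1), otherwise guess."""
    eta = np.all(alpha[q == 1] == 1)   # conjunctive condensation rule
    return (1 - slip) if eta else guess

q = np.array([1, 0, 1])      # item requires attributes 1 and 3
slip, guess = 0.1, 0.2

for alpha in ([1, 0, 1], [1, 1, 0]):   # two discrete attribute profiles
    p = dina_prob(np.array(alpha), q, slip, guess)
    print(alpha, f"P(correct) = {p:.2f}")
```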
Peer reviewed
Hou, Likun; de la Torre, Jimmy; Nandakumar, Ratna – Journal of Educational Measurement, 2014
Analyzing examinees' responses using cognitive diagnostic models (CDMs) has the advantage of providing diagnostic information. To ensure the validity of the results from these models, differential item functioning (DIF) in CDMs needs to be investigated. In this article, the Wald test is proposed to examine DIF in the context of CDMs. This study…
Descriptors: Test Bias, Models, Simulation, Error Patterns
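The Wald statistic itself is generic: it compares an item's parameter vector across groups against the estimation-error covariance. A sketch with invented estimates and covariance (not the article's simulation design):

```python
import numpy as np
from scipy import stats

# Hypothetical item-parameter estimates (e.g., guess and slip) per group
beta_ref = np.array([0.20, 0.10])
beta_foc = np.array([0.28, 0.15])

# Hypothetical covariance matrix of the difference between the estimates
cov_diff = np.array([[0.0010, 0.0002],
                     [0.0002, 0.0012]])

diff = beta_ref - beta_foc
wald = diff @ np.linalg.inv(cov_diff) @ diff   # ~ chi-square under H0: no DIF
p_value = stats.chi2.sf(wald, df=len(diff))
print(f"Wald = {wald:.2f}, p = {p_value:.4f}")
```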
Peer reviewed
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
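As an illustration of the assumption at issue (numbers invented, not taken from the study): if difficulty drifts with serial position, a position-blind Rasch calibration misattributes that drift to the item itself.

```python
import numpy as np

def p_rasch(theta, b):
    """Rasch model probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta, b = 0.0, 0.0
drift = 0.03                 # hypothetical difficulty increase per position

for pos in (1, 20, 40):      # same item administered at different positions
    print(f"position {pos:2d}: P(correct) = {p_rasch(theta, b + drift * pos):.3f}")
```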
Peer reviewed
Wang, Chuang; Kim, Do-Hong; Ng, Kok-Mun – Journal of Psychoeducational Assessment, 2012
This study examined the factorial and item-level invariance of Wong and Law's emotional intelligence scale (WLEIS) in a sample of 375 international students in U.S. universities. Confirmatory factor analysis (CFA) and differential item functioning (DIF) analysis were employed at the test and item level, respectively. International students from…
Descriptors: Factor Analysis, Emotional Intelligence, Test Bias, Models
Peer reviewed
Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2012
There is a lack of research on the effects of outliers on decisions about the number of factors to retain in an exploratory factor analysis, especially for outliers arising from unintended and unknowingly included subpopulations. The purpose of the present research was to investigate how outliers from an unintended and unknowingly included…
Descriptors: Factor Analysis, Factor Structure, Evaluation Research, Evaluation Methods
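A toy demonstration (synthetic data, not the authors' design) of how rows from an unintended subpopulation can perturb the eigenvalues that factor-retention rules depend on:

```python
import numpy as np

rng = np.random.default_rng(1)

# Intended population: one common factor driving six indicators
n, k = 300, 6
factor = rng.normal(size=(n, 1))
clean = factor @ np.ones((1, k)) * 0.7 + rng.normal(scale=0.7, size=(n, k))

# Unintended subpopulation: a shifted response pattern on half the items
outliers = rng.normal(size=(15, k))
outliers[:, :3] += 4.0

def eigenvalues(data):
    """Descending eigenvalues of the inter-item correlation matrix."""
    return np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

print("clean:        ", np.round(eigenvalues(clean)[:3], 2))
print("contaminated: ", np.round(eigenvalues(np.vstack([clean, outliers]))[:3], 2))
```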
Engelhard, George, Jr.; Wind, Stefanie A.; Kobrin, Jennifer L.; Chajewski, Michael – College Board, 2013
The purpose of this study is to illustrate the use of explanatory models based on Rasch measurement theory to detect systematic relationships between student and item characteristics and achievement differences using differential item functioning (DIF), differential group functioning (DGF), and differential person functioning (DPF) techniques. The…
Descriptors: Test Bias, Evaluation Methods, Measurement Techniques, Writing Evaluation
Peer reviewed
Woods, Carol M.; Grimm, Kevin J. – Applied Psychological Measurement, 2011
In extant literature, multiple indicator multiple cause (MIMIC) models have been presented for identifying items that display uniform differential item functioning (DIF) only, not nonuniform DIF. This article addresses, for apparently the first time, the use of MIMIC models for testing both uniform and nonuniform DIF with categorical indicators. A…
Descriptors: Test Bias, Testing, Interaction, Item Response Theory
Peer reviewed
Gomez, Rapson – Journal of Attention Disorders, 2012
Objective: The generalized partial credit model, which is based on item response theory (IRT), was used to test differential item functioning (DIF) for the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed.) inattention (IA) and hyperactivity/impulsivity (HI) symptoms across boys and girls. Method: To accomplish this, parents completed…
Descriptors: Rating Scales, Item Response Theory, Mental Disorders, Attention Deficit Hyperactivity Disorder
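For reference, the generalized partial credit model's category probabilities can be computed directly from a discrimination and a set of step parameters. A generic implementation with invented values, unrelated to the article's data:

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Generalized partial credit model: probabilities for categories
    0..m, where b holds the m step (threshold) parameters."""
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    expz = np.exp(steps - steps.max())   # subtract max for numerical stability
    return expz / expz.sum()

# Hypothetical 4-category symptom rating item
a = 1.1
b = [-0.5, 0.3, 1.2]
print(np.round(gpcm_probs(theta=0.0, a=a, b=b), 3))
```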
Peer reviewed
Cho, Sun-Joo; Bottge, Brian A.; Cohen, Allan S.; Kim, Seock-Ho – Journal of Special Education, 2011
Current methods for detecting growth of students' problem-solving skills in math focus mainly on analyzing changes in test scores. Score-level analysis, however, may fail to reflect subtle changes that might be evident at the item level. This article demonstrates a method for studying item-level changes using data from a multiwave experiment with…
Descriptors: Test Bias, Group Membership, Mathematics Skills, Ability
Peer reviewed
Magis, David; Raiche, Gilles; Beland, Sebastien; Gerard, Paul – International Journal of Testing, 2011
We present an extension of the logistic regression procedure to identify dichotomous differential item functioning (DIF) in the presence of more than two groups of respondents. Starting from the usual framework of a single focal group, we propose a general approach to estimate the item response functions in each group and to test for the presence…
Descriptors: Language Skills, Identification, Foreign Countries, Evaluation Methods
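The logistic-regression DIF framework extends naturally to several focal groups by entering group dummies (uniform DIF) and group-by-score interactions (nonuniform DIF), then comparing nested models with a likelihood-ratio test. A generic sketch with simulated data (not the authors' estimator or the difR implementation):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(7)
n = 900
score = rng.normal(size=n)             # matching variable (e.g., rest score)
group = rng.integers(0, 3, size=n)     # one reference plus two focal groups
g1, g2 = (group == 1).astype(float), (group == 2).astype(float)

# Simulate uniform DIF against focal group 2 on the studied item
logit = 0.8 * score - 0.6 * g2
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

base = sm.Logit(y, sm.add_constant(np.column_stack([score]))).fit(disp=0)
full = sm.Logit(y, sm.add_constant(
    np.column_stack([score, g1, g2, g1 * score, g2 * score]))).fit(disp=0)

lr = 2 * (full.llf - base.llf)         # 4 parameters added by the full model
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df=4):.4f}")
```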
Peer reviewed
Wang, Wen-Chung; Shih, Ching-Lin; Yang, Chih-Chien – Educational and Psychological Measurement, 2009
This study implements a scale purification procedure onto the standard MIMIC method for differential item functioning (DIF) detection and assesses its performance through a series of simulations. It is found that the MIMIC method with scale purification (denoted as M-SP) outperforms the standard MIMIC method (denoted as M-ST) in controlling…
Descriptors: Test Items, Measures (Individuals), Test Bias, Evaluation Research
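Scale purification is an iterative idea independent of the MIMIC machinery: test items for DIF, drop flagged items from the matching (anchor) set, and repeat until the flagged set stabilizes. A schematic loop; the `dif_test` callable is a hypothetical placeholder, not the M-SP procedure itself:

```python
def purify(items, responses, dif_test, max_iter=10):
    """Generic scale purification: iterate DIF testing, removing flagged
    items from the anchor set used to compute the matching score."""
    anchors = set(items)
    flagged = set()
    for _ in range(max_iter):
        flagged = {i for i in items if dif_test(i, anchors - {i}, responses)}
        new_anchors = set(items) - flagged
        if new_anchors == anchors:     # flagged set has stabilized
            break
        anchors = new_anchors
    return flagged
```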
Peer reviewed
Wyse, Adam E.; Mapuranga, Raymond – International Journal of Testing, 2009
Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when data fit the Rasch model. Through simulations and an international…
Descriptors: Test Bias, Evaluation Methods, Test Items, Educational Assessment
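Under the Rasch model, an item's information function is I(θ) = P(θ)(1 − P(θ)). A sketch of comparing two groups' calibrations of the same item by the overlap of their information curves; this is a generic similarity measure, not necessarily the authors' ISI formula:

```python
import numpy as np

def rasch_info(theta, b):
    """Rasch item information: I(theta) = P(theta) * (1 - P(theta))."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1 - p)

theta = np.linspace(-4, 4, 401)
info_ref = rasch_info(theta, b=0.0)    # reference-group calibration
info_foc = rasch_info(theta, b=0.6)    # focal-group calibration (invented shift)

# Overlap of the two curves relative to their union: 1 means identical
overlap = np.trapz(np.minimum(info_ref, info_foc), theta)
total = np.trapz(np.maximum(info_ref, info_foc), theta)
print(f"similarity = {overlap / total:.3f}")
```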
Peer reviewed
Koedel, Cory – Economics of Education Review, 2009
This paper examines whether educational production in secondary school involves joint production among teachers across subjects. In doing so, it also provides insights into the reliability of value-added modeling. Teacher value-added to reading test scores is estimated for four different teacher types: English, math, science, and social studies.…
Descriptors: Teacher Role, Reading Tests, English Teachers, Secondary School Teachers