Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 11
Descriptor
Computer Software: 12
Test Bias: 12
Test Items: 12
Item Response Theory: 7
Statistical Analysis: 5
Foreign Countries: 4
Simulation: 4
Comparative Analysis: 3
Evaluation Methods: 3
Evaluation Research: 3
Measurement: 3
Source
ProQuest LLC: 3
Educational and Psychological Measurement: 2
Applied Psychological Measurement: 1
Educational Technology & Society: 1
International Journal of Testing: 1
Journal of Educational Measurement: 1
Measurement: Interdisciplinary Research and Perspectives: 1
Psicologica: International Journal of Methodology and Experimental Psychology: 1
Author
Benitez, Isabel: 1
Chang, Chi: 1
Cronje, Johannes C.: 1
Finch, Holmes: 1
Gattamorta, Karina A.: 1
Gomez-Benito, Juana: 1
He, Wei: 1
Hidalgo, M. Dolores: 1
Jiao, Hong: 1
Jones, Martha S.: 1
Koon, Sharon: 1
Publication Type
Journal Articles: 8
Reports - Research: 7
Dissertations/Theses -…: 3
Reports - Descriptive: 1
Reports - Evaluative: 1
Speeches/Meeting Papers: 1
Education Level
Elementary Secondary Education: 3
Higher Education: 1
Postsecondary Education: 1
Secondary Education: 1
Location
Canada: 1
Morocco: 1
South Africa: 1
Spain: 1
Assessments and Surveys
Program for International…: 1
Zheng, Xiaying; Yang, Ji Seung – Measurement: Interdisciplinary Research and Perspectives, 2021
The purpose of this paper is to briefly introduce the two most common applications of multiple-group item response theory (IRT) models, namely differential item functioning (DIF) analysis and nonequivalent-group score linking with simultaneous calibration. We illustrate how to conduct those analyses using the "Stata" item…
Descriptors: Item Response Theory, Test Bias, Computer Software, Statistical Analysis
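The two applications are only named in the abstract above; as a point of reference, a common way to write the multiple-group model and the item-level DIF test (a sketch of the standard two-parameter logistic formulation, not necessarily the exact parameterization behind Stata's IRT commands) is:

```latex
% Multiple-group 2PL with group-specific item parameters for item j, group g:
P(X_{pj}=1 \mid \theta_p, g) =
  \frac{1}{1+\exp\{-a_j^{(g)}(\theta_p - b_j^{(g)})\}}
% Likelihood-ratio DIF test for item j: fit once with a_j^{(g)}, b_j^{(g)}
% constrained equal across groups and once with them free, then compare:
G^2_j = -2\left[\ln L_{\text{constrained}} - \ln L_{\text{free}}\right] \;\sim\; \chi^2_{df},
% df = number of freed item parameters (two per group contrast for the 2PL).
```

Nonequivalent-group score linking uses the same multiple-group machinery: anchor items are constrained equal across groups during a simultaneous calibration, which places the group ability distributions on a common scale.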
Raykov, Tenko; Marcoulides, George A.; Lee, Chun-Lung; Chang, Chi – Educational and Psychological Measurement, 2013
This note is concerned with a latent variable modeling approach for the study of differential item functioning in a multigroup setting. A multiple-testing procedure that can be used to evaluate group differences in response probabilities on individual items is discussed. The method is readily employed when the aim is also to locate possible…
Descriptors: Test Bias, Statistical Analysis, Models, Hypothesis Testing
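The note's latent variable method is only summarized above; the toy sketch below illustrates just the multiple-testing layer on simulated data, using a plain two-proportion z-test per item and a Benjamini-Hochberg correction (one standard choice, assumed here, not taken from the note), and it omits the ability adjustment that the latent variable approach provides.

```python
# Hypothetical sketch: flag items whose correct-response probabilities differ
# across two groups, with a Benjamini-Hochberg correction for multiple testing.
# This is NOT the latent variable procedure of the note; it only shows the
# multiple-testing step, and it ignores ability matching entirely.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_items, n_per_group = 20, 500
p_ref = rng.uniform(0.4, 0.9, n_items)      # reference-group probabilities
p_foc = p_ref.copy()
p_foc[3] -= 0.15                            # build a group difference into item 3
ref = rng.binomial(1, p_ref, (n_per_group, n_items))
foc = rng.binomial(1, p_foc, (n_per_group, n_items))

pvals = []
for j in range(n_items):
    counts = np.array([ref[:, j].sum(), foc[:, j].sum()])
    nobs = np.array([n_per_group, n_per_group])
    _, p = proportions_ztest(counts, nobs)  # two-sample z-test on proportions
    pvals.append(p)

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("Flagged items:", np.where(reject)[0])
```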
Wang, Wen-Chung; Shih, Ching-Lin; Sun, Guo-Wei – Educational and Psychological Measurement, 2012
The DIF-free-then-DIF (DFTD) strategy for detecting differential item functioning (DIF) consists of two steps: (a) select a set of items that are most likely to be DIF-free and (b) assess the other items for DIF using the designated items as anchors. The rank-based method together with the computer software IRTLRDIF can select a set of DIF-free polytomous items…
Descriptors: Test Bias, Test Items, Item Response Theory, Evaluation Methods
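The two-step DFTD flow can be sketched without IRTLRDIF; in the sketch below a simple logistic-regression DIF statistic for dichotomous items stands in for the rank-based polytomous procedure the article evaluates, purely to show the select-anchors-then-test logic. The function names and the choice of four anchors are illustrative.

```python
# A minimal DIF-free-then-DIF (DFTD) sketch with a logistic-regression DIF
# statistic standing in for IRTLRDIF (dichotomous items only, for brevity).
import numpy as np
import statsmodels.api as sm

def lr_dif_stat(resp, group, anchor, item):
    """|z| of the group effect on `item`, conditioning on the anchor sum score."""
    X = sm.add_constant(np.column_stack([resp[:, anchor].sum(axis=1), group]))
    fit = sm.Logit(resp[:, item], X).fit(disp=0)
    return abs(fit.tvalues[2])              # z statistic of the group coefficient

def dftd(resp, group, n_anchors=4):
    n_items = resp.shape[1]
    # Step (a): screen every item against all other items, then keep the
    # n_anchors items with the smallest DIF statistics as designated anchors.
    screen = [lr_dif_stat(resp, group, [k for k in range(n_items) if k != j], j)
              for j in range(n_items)]
    anchors = [int(j) for j in np.argsort(screen)[:n_anchors]]
    # Step (b): re-assess the remaining items using only those anchors.
    flags = {j: lr_dif_stat(resp, group, anchors, j)
             for j in range(n_items) if j not in anchors}
    return anchors, flags
```

Given a 0/1 response matrix `resp` and a 0/1 group vector `group`, `dftd(resp, group)` returns the designated anchor items and a DIF statistic for each remaining item.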
Padilla, Jose Luis; Hidalgo, M. Dolores; Benitez, Isabel; Gomez-Benito, Juana – Psicologica: International Journal of Methodology and Experimental Psychology, 2012
The analysis of differential item functioning (DIF) examines whether people with matching ability levels respond differently to items according to characteristics such as language or ethnicity. This analysis can be performed by calculating various statistics, one of the most important being the Mantel-Haenszel,…
Descriptors: Foreign Countries, Test Bias, Computer Software, Computer Software Evaluation
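The Mantel-Haenszel procedure the abstract names is simple enough to compute directly; the sketch below is a bare-bones version for a single dichotomous item (no continuity correction, significance test, purification, or handling of empty strata).

```python
# Mantel-Haenszel DIF for one dichotomous item, stratified by total test score.
import numpy as np

def mantel_haenszel_dif(item, group, total):
    """item: 0/1 responses; group: 0 = reference, 1 = focal; total: matching score."""
    num = den = 0.0
    for s in np.unique(total):
        m = total == s
        A = np.sum((group[m] == 0) & (item[m] == 1))   # reference, correct
        B = np.sum((group[m] == 0) & (item[m] == 0))   # reference, incorrect
        C = np.sum((group[m] == 1) & (item[m] == 1))   # focal, correct
        D = np.sum((group[m] == 1) & (item[m] == 0))   # focal, incorrect
        n = A + B + C + D
        if n == 0:
            continue
        num += A * D / n
        den += B * C / n
    alpha_mh = num / den                               # common odds ratio
    delta_mh = -2.35 * np.log(alpha_mh)                # ETS delta scale
    return alpha_mh, delta_mh                          # delta < 0: DIF against focal group
```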
Jiao, Hong; Wang, Shudong; He, Wei – Journal of Educational Measurement, 2013
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with that from the marginalized maximum likelihood estimation (MMLE)…
Descriptors: Computation, Item Response Theory, Models, Monte Carlo Methods
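For reference, the Rasch testlet model in question is commonly written as follows (a sketch of the standard formulation, not a claim about the exact WINBUGS specification the authors used):

```latex
% Rasch testlet model: ability theta_p, difficulty b_i, and a person-specific
% random effect gamma for the testlet d(i) that contains item i.
P(X_{pi}=1 \mid \theta_p) =
  \frac{\exp\{\theta_p - b_i + \gamma_{p\,d(i)}\}}{1+\exp\{\theta_p - b_i + \gamma_{p\,d(i)}\}},
\qquad \gamma_{p\,d(i)} \sim N\!\left(0, \sigma^2_{d(i)}\right).
```

Because the testlet effect enters as an additional random intercept, the model can be rewritten in hierarchical form, which is the equivalence with the three-level one-parameter testlet model that the study demonstrates.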
Gattamorta, Karina A.; Penfield, Randall D.; Myers, Nicholas D. – International Journal of Testing, 2012
Measurement invariance is a common consideration in the evaluation of the validity and fairness of test scores when the tested population contains distinct groups of examinees, such as examinees receiving different forms of a translated test. Measurement invariance in polytomous items has traditionally been evaluated at the item-level,…
Descriptors: Foreign Countries, Psychometrics, Test Bias, Test Items
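Item-level evaluation of invariance is easiest to state against a concrete polytomous model; one common parameterization (assumed here, not necessarily the model used in the article) is a generalized partial credit model with group-specific step parameters:

```latex
% GPCM with group-specific step difficulties b_{iv}^{(g)}; the sum over v=1..0 is 0.
P(X_{pi}=k \mid \theta_p, g) =
  \frac{\exp\Big\{\sum_{v=1}^{k} a_i\,\big(\theta_p - b_{iv}^{(g)}\big)\Big\}}
       {\sum_{c=0}^{m_i} \exp\Big\{\sum_{v=1}^{c} a_i\,\big(\theta_p - b_{iv}^{(g)}\big)\Big\}}
% Item-level invariance requires b_{iv}^{(g)} = b_{iv} for every step v of item i;
% a finer-grained analysis asks which individual steps violate that equality.
```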
Wright, Keith D. – ProQuest LLC, 2011
Standardized testing has been part of the American educational system for decades. It has been controversial from the beginning, remains controversial today, and will likely remain so. Given the current federal educational policies supporting increased standardized testing, psychometricians, educators, and policy makers…
Descriptors: Test Bias, Test Items, Simulation, Testing
Koon, Sharon – ProQuest LLC, 2010
This study examined the effectiveness of the odds-ratio method (Penfield, 2008) and the multinomial logistic regression method (Kato, Moen, & Thurlow, 2009) for measuring differential distractor functioning (DDF) effects in comparison to the standardized distractor analysis approach (Schmitt & Bleistein, 1987). Students classified as participating…
Descriptors: Test Bias, Test Items, Reference Groups, Lunch Programs
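Of the methods compared, the multinomial logistic regression idea is the most compact to sketch: model the chosen option as a function of the matching score and group, and read differential distractor functioning off the group coefficients. The sketch below is a generic screen under that assumption, not the specific Kato, Moen, and Thurlow procedure, and the variable names are illustrative.

```python
# A generic multinomial-logistic screen for differential distractor functioning.
# option: chosen response coded 0..K with 0 = keyed answer; total: matching score;
# group: 0 = reference, 1 = focal.
import pandas as pd
import statsmodels.api as sm

def ddf_mnlogit(option, total, group):
    X = sm.add_constant(pd.DataFrame({"total": total, "group": group}))
    fit = sm.MNLogit(pd.Series(option), X).fit(disp=0)
    # Category 0 (the key) is the reference outcome, so the `group` coefficient
    # in each equation is that distractor's log-odds shift for the focal group
    # after conditioning on total score.
    return fit
```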
MacInnes, Jann Marie Wise – ProQuest LLC, 2009
Multilevel data often exist in educational studies. The focus of this study is to consider differential item functioning (DIF) for dichotomous items from a multilevel perspective. One of the most often used methods for detecting DIF in dichotomously scored items is the Mantel-Haenszel log odds-ratio. However, the Mantel-Haenszel reduces the…
Descriptors: Test Bias, Simulation, Item Response Theory, Test Items
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
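The "common formulae" for converting factor-analytic estimates to IRT item parameters are, in the normal-ogive case, usually given as follows (a standard textbook conversion, which may differ in detail from the one used in the study):

```latex
% Conversion from CFA loadings/thresholds to MIRT slopes and intercept, item i:
a_i = \frac{\lambda_i}{\sqrt{1-\lambda_i^{\top}\Phi\,\lambda_i}}, \qquad
d_i = \frac{-\tau_i}{\sqrt{1-\lambda_i^{\top}\Phi\,\lambda_i}}
% lambda_i: loading vector, Phi: factor correlation matrix, tau_i: threshold.
% In one dimension this reduces to a = lambda / sqrt(1 - lambda^2), b = tau / lambda.
```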
Marks, Anthony M.; Cronje, Johannes C. – Educational Technology & Society, 2008
Computer-based assessments are becoming more commonplace, perhaps as a necessity for faculty to cope with large class sizes. These tests often occur in large computer testing venues in which test security may be compromised. In an attempt to limit the likelihood of cheating in such venues, randomised presentation of items is automatically…
Descriptors: Educational Assessment, Educational Testing, Research Needs, Test Items
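As a trivial illustration of the kind of randomised presentation the abstract describes, a per-examinee permutation of the item pool can be made reproducible by seeding on an examinee identifier (an assumption of this sketch, not a detail taken from the article):

```python
# Reproducible per-examinee item ordering: the same examinee always gets the
# same permutation, while different examinees get different orders.
import random

def item_order(item_ids, examinee_id):
    order = list(item_ids)
    random.Random(examinee_id).shuffle(order)   # deterministic per examinee
    return order

print(item_order(range(1, 11), examinee_id=20230042))
```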
Jones, Martha S.; Vredevoogd, Janet – 1984
This study was designed to determine item bias by sex on a test of spatial visualization administered to middle school students. Two chi-square techniques were used to assess item bias: the Scheuneman C² and the Camilli chi-square. The direction of bias was indicated by assigning positive signs to the chi-square components contributed by one group…
Descriptors: Cognitive Tests, Computer Software, Junior High Schools, Middle Schools
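Both statistics named in the abstract belong to the observed-versus-expected chi-square family computed over ability (score) intervals; a hedged generic form, not necessarily the exact 1984 definitions, is:

```latex
% Full chi-square item bias statistic for item i, over score intervals k,
% groups g, and responses r in {0,1}:
\chi^2_i = \sum_{k}\sum_{g}\sum_{r\in\{0,1\}} \frac{(O_{gkr}-E_{gkr})^2}{E_{gkr}}
% O_{gkr}: observed count of group-g examinees in interval k giving response r;
% E_{gkr}: expected count under no bias, from pooled within-interval proportions.
% Scheuneman's statistic restricts the sum to correct responses (r = 1) only.
```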