Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 4
Since 2006 (last 20 years): 14
Descriptor
Computation: 19
Evaluation Methods: 19
Test Bias: 19
Simulation: 7
Item Response Theory: 6
Research Methodology: 6
Test Items: 6
Scores: 5
Statistical Analysis: 5
Models: 4
Comparative Analysis: 3
Author
Béland, Sébastien: 1
Boruch, Robert: 1
Braun, Henry: 1
Burke, Mary A.: 1
Camilli, Gregory: 1
Chenchen Ma: 1
Cheung, Mike W. L.: 1
Chiu, Ting-Wei: 1
Cho, Sun-Joo: 1
Chun Wang: 1
Croy, Calvin D.: 1
Publication Type
Journal Articles: 17
Reports - Research: 9
Reports - Descriptive: 5
Reports - Evaluative: 5
Education Level
Grade 4: 3
Elementary Education: 2
Higher Education: 2
Elementary Secondary Education: 1
Grade 10: 1
Grade 3: 1
Grade 5: 1
Grade 6: 1
Grade 7: 1
Grade 8: 1
Grade 9: 1
Location
Canada: 1
Florida: 1
United States: 1
Assessments and Surveys
Program for International Student Assessment (PISA): 2
National Assessment of Educational Progress (NAEP): 1
Chenchen Ma; Jing Ouyang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Survey instruments and assessments are frequently used in many domains of social science. When the constructs that these assessments try to measure become multifaceted, multidimensional item response theory (MIRT) provides a unified framework and convenient statistical tool for item analysis, calibration, and scoring. However, the computational…
Descriptors: Algorithms, Item Response Theory, Scoring, Accuracy
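The compensatory multidimensional 2PL at the core of MIRT is easy to write down. A minimal sketch, assuming a compensatory M2PL; the loading matrix `a` and intercepts `d` are illustrative values, not taken from the paper:

```python
import numpy as np

def m2pl_prob(theta, a, d):
    """Compensatory multidimensional 2PL: P(X=1 | theta) = sigmoid(a . theta + d).

    theta : (D,) latent trait vector
    a     : (J, D) item discrimination (loading) matrix
    d     : (J,) item intercepts
    Returns a (J,) vector of correct-response probabilities.
    """
    z = a @ theta + d
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative two-dimensional example with three items.
a = np.array([[1.2, 0.0],   # item loads on dimension 1 only
              [0.0, 0.9],   # item loads on dimension 2 only
              [0.7, 0.6]])  # item loads on both facets
d = np.array([-0.5, 0.3, 0.0])
theta = np.array([0.8, -0.4])

print(m2pl_prob(theta, a, d))
```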
Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol – Educational Measurement: Issues and Practice, 2016
The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…
Descriptors: Test Bias, Research Methodology, Evaluation Methods, Models
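The module's key move is comparing item profiles across latent rather than manifest groups. As an illustration of the latent-grouping ingredient only (not the full mixture item response model the module covers), here is a small EM for a binary latent class model; the data, class count, and profiles are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def lca_em(X, K=2, n_iter=200):
    """EM for a latent class model on binary items: each class k has
    its own item profile p[k, j] = P(X_j = 1 | class k)."""
    N, J = X.shape
    pi = np.full(K, 1.0 / K)                 # class proportions
    p = rng.uniform(0.3, 0.7, size=(K, J))   # item profiles
    for _ in range(n_iter):
        # E-step: class responsibilities, computed in log space for stability
        logp = (X[:, None, :] * np.log(p) +
                (1 - X[:, None, :]) * np.log(1 - p)).sum(axis=2) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update proportions and profiles from responsibilities
        pi = r.mean(axis=0)
        p = (r.T @ X) / r.sum(axis=0)[:, None]
        p = p.clip(1e-6, 1 - 1e-6)
    return pi, p

# Simulated data: item 3's profile differs between the two latent groups (DIF-like).
true_p = np.array([[0.8, 0.7, 0.9, 0.6],
                   [0.8, 0.7, 0.3, 0.6]])
z = rng.integers(0, 2, size=1000)
X = (rng.uniform(size=(1000, 4)) < true_p[z]).astype(float)

pi, p = lca_em(X)
print("class sizes:", pi.round(2))
print("item profiles per latent class (compare columns; classes may be relabeled):\n",
      p.round(2))
```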
Dimitrov, Dimiter M. – Measurement and Evaluation in Counseling and Development, 2017
This article offers an approach to examining differential item functioning (DIF) under its item response theory (IRT) treatment in the framework of confirmatory factor analysis (CFA). The approach is based on integrating IRT- and CFA-based testing of DIF and using bias-corrected bootstrap confidence intervals, with supporting syntax code in Mplus.
Descriptors: Test Bias, Item Response Theory, Factor Analysis, Evaluation Methods
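The bias-corrected (BC) percentile bootstrap interval the article relies on can be sketched generically. The statistic below (a correlation) is a stand-in for Dimitrov's DIF effect, and the Mplus workflow is not reproduced:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def bc_bootstrap_ci(x, y, stat, B=2000, alpha=0.05):
    """Bias-corrected (BC) percentile bootstrap CI for stat(x, y)."""
    est = stat(x, y)
    n = len(x)
    boot = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)   # resample cases with replacement
        boot[b] = stat(x[idx], y[idx])
    # Bias-correction constant from the share of resample estimates below the estimate.
    z0 = norm.ppf((boot < est).mean())
    za = norm.ppf(alpha / 2)
    lo, hi = norm.cdf(2 * z0 + za), norm.cdf(2 * z0 - za)
    return np.quantile(boot, [lo, hi])

x = rng.normal(size=100)
y = 0.4 * x + rng.normal(size=100)
corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print("BC 95% CI for r:", bc_bootstrap_ci(x, y, corr).round(3))
```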
Assessment of Differential Item Functioning under Cognitive Diagnosis Models: The DINA Model Example
Li, Xiaomin; Wang, Wen-Chung – Journal of Educational Measurement, 2015
The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable for cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are…
Descriptors: Test Bias, Models, Cognitive Measurement, Evaluation Methods
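The DINA item response function itself is compact: an examinee answers item j correctly with probability 1 − s_j when mastering every attribute the item requires, and with probability g_j otherwise. A sketch with a made-up Q-matrix and slip/guess values:

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """DINA model: P(X_j = 1 | alpha) = (1 - s_j)^eta_j * g_j^(1 - eta_j),
    where eta_j = 1 iff the examinee masters every attribute item j requires.

    alpha : (K,) binary attribute-mastery vector
    q     : (J, K) binary Q-matrix (item-by-attribute requirements)
    slip, guess : (J,) slip and guessing parameters
    """
    eta = np.all(alpha >= q, axis=1)          # has all required attributes?
    return np.where(eta, 1 - slip, guess)

q = np.array([[1, 0], [0, 1], [1, 1]])        # item 3 requires both attributes
slip = np.array([0.1, 0.15, 0.2])
guess = np.array([0.2, 0.25, 0.1])

for alpha in ([0, 0], [1, 0], [1, 1]):
    print(alpha, dina_prob(np.array(alpha), q, slip, guess).round(2))
```

Group DIF in this setting would plausibly surface as group-specific slip or guessing parameters for an item, conditional on the same attribute profile.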
Rutkowski, Leslie; Rutkowski, David; Zhou, Yan – International Journal of Testing, 2016
Using an empirically based simulation study, we show that typically used methods of choosing an item calibration sample have significant impacts on achievement bias and system rankings. We examine whether recent PISA accommodations, especially for lower performing participants, can mitigate some of this bias. Our findings indicate that standard…
Descriptors: Simulation, International Programs, Adolescents, Student Evaluation
Magis, David; Raîche, Gilles; Béland, Sébastien; Gérard, Paul – International Journal of Testing, 2011
We present an extension of the logistic regression procedure to identify dichotomous differential item functioning (DIF) in the presence of more than two groups of respondents. Starting from the usual framework of a single focal group, we propose a general approach to estimate the item response functions in each group and to test for the presence…
Descriptors: Language Skills, Identification, Foreign Countries, Evaluation Methods
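The generalized logistic-regression DIF test compares a matching-only model against one with group main effects and group-by-score interactions via a likelihood-ratio test. A sketch with simulated data; the three-group setup and effect sizes are invented, and `statsmodels` stands in for whatever software the authors used:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(2)

# Simulate: matching variable, 3 groups, uniform DIF against group 2.
n = 1500
group = rng.integers(0, 3, size=n)
score = rng.normal(size=n)
logit = 0.2 + 1.1 * score - 0.6 * (group == 2)
item = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)
df = pd.DataFrame({"item": item, "score": score, "group": group.astype(str)})

# Base model: matching variable only; augmented model adds group main effects
# and group-by-score interactions (uniform + nonuniform DIF).
m0 = smf.logit("item ~ score", df).fit(disp=0)
m1 = smf.logit("item ~ score * C(group)", df).fit(disp=0)

lr = 2 * (m1.llf - m0.llf)
dof = m1.df_model - m0.df_model
print(f"LR = {lr:.2f}, df = {dof:.0f}, p = {chi2.sf(lr, dof):.4f}")
```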
Woods, Carol M. – Applied Psychological Measurement, 2011
Differential item functioning (DIF) occurs when an item on a test, questionnaire, or interview has different measurement properties for one group of people versus another, irrespective of true group-mean differences on the constructs being measured. This article is focused on item response theory based likelihood ratio testing for DIF (IRT-LR or…
Descriptors: Simulation, Item Response Theory, Testing, Questionnaires
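DIF as Woods defines it, different measurement properties at equal standing on the construct, can be pictured as a gap between group-specific item characteristic curves. The 2PL parameters below are invented:

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve."""
    return 1 / (1 + np.exp(-a * (theta - b)))

theta = np.linspace(-4, 4, 801)
ref = icc_2pl(theta, a=1.2, b=0.0)    # reference-group parameters
foc = icc_2pl(theta, a=1.2, b=0.5)    # focal group: item is harder (uniform DIF)

# Unsigned area between the curves, a common DIF magnitude summary.
area = np.abs(ref - foc).sum() * (theta[1] - theta[0])
print(f"unsigned area between ICCs: {area:.3f}")
```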
Camilli, Gregory; Prowker, Adam; Dossey, John A.; Lindquist, Mary M.; Chiu, Ting-Wei; Vargas, Sadako; de la Torre, Jimmy – Journal of Educational Measurement, 2008
A new method for analyzing differential item functioning is proposed to investigate the relative strengths and weaknesses of multiple groups of examinees. Accordingly, the notion of a conditional measure of difference between two groups (Reference and Focal) is generalized to a conditional variance. The objective of this article is to present and…
Descriptors: Test Bias, National Competency Tests, Grade 4, Difficulty Level
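A rough rendering of the conditional-variance idea from the abstract: at each level of the matching score, take the variance of the group-specific proportions correct, then pool across levels. This illustrates the notion, not the authors' exact estimator; all data are simulated:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Simulated item responses for 3 groups, matched on a coarse total score (0-10).
n = 3000
g = rng.integers(0, 3, size=n)
s = rng.integers(0, 11, size=n)
p = 0.15 + 0.07 * s - 0.10 * (g == 1)   # group 1 disadvantaged at every score level
x = (rng.uniform(size=n) < p).astype(int)
df = pd.DataFrame({"group": g, "score": s, "x": x})

# At each score level: variance of group proportions correct, weighted by level size.
by = df.groupby(["score", "group"])["x"].mean().unstack()
weights = df.groupby("score").size()
cond_var = by.var(axis=1, ddof=0).dropna()   # between-group variance per score level
pooled = np.average(cond_var, weights=weights[cond_var.index])
print(f"pooled conditional variance: {pooled:.4f}")
```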
Wyse, Adam E.; Mapuranga, Raymond – International Journal of Testing, 2009
Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when data fits the Rasch model. Through simulations and an international…
Descriptors: Test Bias, Evaluation Methods, Test Items, Educational Assessment
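Under the Rasch model the item information function is I(θ) = P(θ)(1 − P(θ)). The exact definition of the information similarity index is not given in the snippet, so the overlap ratio below is only a stand-in for the idea of comparing information curves across groups:

```python
import numpy as np

def rasch_info(theta, b):
    """Rasch item information: I(theta) = P(theta) * (1 - P(theta))."""
    p = 1 / (1 + np.exp(-(theta - b)))
    return p * (1 - p)

theta = np.linspace(-4, 4, 801)
i_ref = rasch_info(theta, b=0.0)   # item as calibrated in the reference group
i_foc = rasch_info(theta, b=0.7)   # same item calibrated in the focal group

# Illustrative similarity: shared area under both curves relative to total area.
overlap = np.minimum(i_ref, i_foc).sum() / np.maximum(i_ref, i_foc).sum()
print(f"information-curve overlap: {overlap:.3f}")   # 1.0 means identical curves
```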
Yoo, Jin Eun – Educational and Psychological Measurement, 2009
This Monte Carlo study investigates the beneficial effect of including auxiliary variables during estimation of confirmatory factor analysis models with multiple imputation. Specifically, it examines the influence of sample size, missing rates, missingness mechanism combinations, missingness types (linear or convex), and the absence or presence…
Descriptors: Monte Carlo Methods, Research Methodology, Test Validity, Factor Analysis
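A toy Monte Carlo in the spirit of the study: values go missing at random given an auxiliary variable, and an imputation model that includes the auxiliary recovers the mean with less bias. scikit-learn's IterativeImputer and a single mean fill stand in for proper multiple imputation with and without the auxiliary:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
reps, n, bias_with, bias_without = 200, 500, [], []

for _ in range(reps):
    aux = rng.normal(size=n)                       # auxiliary variable
    y = 0.8 * aux + rng.normal(size=n)             # analysis variable, true mean 0
    y_obs = y.copy()
    y_obs[rng.uniform(size=n) < 1 / (1 + np.exp(-aux))] = np.nan  # MAR given aux
    # Impute with vs. without the auxiliary variable in the imputation model.
    with_aux = IterativeImputer().fit_transform(np.column_stack([y_obs, aux]))[:, 0]
    without = np.where(np.isnan(y_obs), np.nanmean(y_obs), y_obs)
    bias_with.append(with_aux.mean())
    bias_without.append(without.mean())

print(f"mean bias with aux:    {np.mean(bias_with):+.3f}")
print(f"mean bias without aux: {np.mean(bias_without):+.3f}")
```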
Braun, Henry; Zhang, Jinming; Vezzu, Sailesh – ETS Research Report Series, 2008
At present, although the percentages of students with disabilities (SDs) and/or students who are English language learners (ELL) excluded from a NAEP administration are reported, no statistical adjustment is made for these excluded students in the calculation of NAEP results. However, the exclusion rates for both SD and ELL students vary…
Descriptors: Research Methodology, Computation, Disabilities, English Language Learners
Sivo, Stephen; Fan, Xitao; Witta, Lea – Structural Equation Modeling: A Multidisciplinary Journal, 2005
The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…
Descriptors: Structural Equation Models, Interaction, Correlation, Test Bias
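The stationary-autocorrelation issue is easy to see in simulated longitudinal data: generate a linear latent growth process and give the occasion-level errors AR(1) structure. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
n, T, rho = 1000, 5, 0.5          # persons, occasions, error autocorrelation

intercept = rng.normal(10, 2, size=n)
slope = rng.normal(1.0, 0.4, size=n)
t = np.arange(T)

# AR(1) manifest-variable errors: e_t = rho * e_{t-1} + innovation
e = np.zeros((n, T))
e[:, 0] = rng.normal(0, 1, size=n)
for j in range(1, T):
    e[:, j] = rho * e[:, j - 1] + rng.normal(0, np.sqrt(1 - rho**2), size=n)

y = intercept[:, None] + slope[:, None] * t + e

# Lag-1 error autocorrelation that a plain growth curve model would ignore:
resid = y - (intercept[:, None] + slope[:, None] * t)
print(np.corrcoef(resid[:, :-1].ravel(), resid[:, 1:].ravel())[0, 1].round(2))
```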
Boruch, Robert – New Directions for Evaluation, 2007
Thomas Jefferson recognized the value of reason and scientific experimentation in the eighteenth century. This chapter extends the idea in contemporary ways to standards that may be used to judge the ethical propriety of randomized trials and the dependability of evidence on effects of social interventions.
Descriptors: Ethics, Standards, Evaluation Methods, Research Methodology
Raykov, Tenko – Structural Equation Modeling: A Multidisciplinary Journal, 2005
A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…
Descriptors: Computation, Goodness of Fit, Test Bias, Statistical Analysis
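The relation the abstract invokes, between the average of resample estimates and the sample estimate, is the standard bootstrap bias correction: bias ≈ mean(θ*) − θ̂, hence θ̂_bc = 2θ̂ − mean(θ*). Shown here for a deliberately biased statistic (the plug-in variance) rather than a noncentrality parameter:

```python
import numpy as np

rng = np.random.default_rng(6)

def bootstrap_bias_correct(x, stat, B=5000):
    """Bootstrap bias correction: theta_bc = 2 * theta_hat - mean(theta_star)."""
    est = stat(x)
    boot = np.array([stat(rng.choice(x, size=len(x))) for _ in range(B)])
    return 2 * est - boot.mean()

x = rng.normal(0, 2, size=30)                # true variance = 4
mle_var = lambda v: np.var(v, ddof=0)        # biased downward by factor (n-1)/n
print(f"plug-in: {mle_var(x):.3f}  "
      f"bias-corrected: {bootstrap_bias_correct(x, mle_var):.3f}")
```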
Kim, Jee-Seon; Frees, Edward W. – Psychometrika, 2006
Statistical methodology for handling omitted variables is presented in a multilevel modeling framework. In many nonexperimental studies, the analyst may not have access to all requisite variables, and this omission may lead to biased estimates of model parameters. By exploiting the hierarchical nature of multilevel data, a battery of statistical…
Descriptors: Simulation, Social Sciences, Structural Equation Models, Computation
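The core problem, an omitted cluster-level variable biasing a pooled estimate while a within-cluster estimator survives, fits in a few lines; the data-generating values are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
G, m = 200, 10                                   # clusters, members per cluster

u = rng.normal(size=G).repeat(m)                 # omitted cluster-level confounder
x = 0.8 * u + rng.normal(size=G * m)             # predictor correlated with it
y = 1.0 * x + 2.0 * u + rng.normal(size=G * m)   # true within-cluster slope = 1.0

# Pooled OLS slope (biased: absorbs the omitted confounder).
pooled = np.cov(x, y, ddof=0)[0, 1] / np.var(x)

# Within (fixed-effects) estimator: demean by cluster first.
cl = np.arange(G).repeat(m)
xd = x - np.bincount(cl, weights=x)[cl] / m
yd = y - np.bincount(cl, weights=y)[cl] / m
within = (xd * yd).sum() / (xd ** 2).sum()

print(f"pooled slope: {pooled:.2f}   within slope: {within:.2f}   truth: 1.00")
```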