Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 2
Descriptor
Test Items: 8
Item Response Theory: 6
Simulation: 5
Evaluation Methods: 3
Foreign Countries: 3
Models: 3
Computation: 2
Monte Carlo Methods: 2
Statistical Analysis: 2
Test Bias: 2
Test Length: 2
Author
Wang, Wen-Chung: 8
Su, Ya-Hui: 2
Chen, Cheng-Te: 1
Cheng, Ying-Yao: 1
Jin, Kuan-Yu: 1
Liu, Chen-Wei: 1
Wilson, Mark: 1
Publication Type
Reports - Evaluative: 8
Journal Articles: 7
Location
Taiwan: 1
Wang, Wen-Chung; Jin, Kuan-Yu – Educational and Psychological Measurement, 2010
In this study, the authors extend the standard item response model with internal restrictions on item difficulty (MIRID) to fit polytomous items using cumulative logits and adjacent-category logits. Moreover, the new model incorporates discrimination parameters and is rooted in a multilevel framework. It is a nonlinear mixed model so that existing…
Descriptors: Difficulty Level, Test Items, Item Response Theory, Generalization
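A hedged sketch of the modeling idea summarized in the entry above (not the authors' exact parameterization): an adjacent-category logit for each polytomous item, with the composite item's step difficulties restricted to a weighted combination of its component (subtask) difficulties, as in the MIRID tradition.

```latex
% Adjacent-category logit for person n on item i, category x (sketch):
\log \frac{P(X_{ni} = x)}{P(X_{ni} = x - 1)} = \alpha_i (\theta_n - \beta_{ix}),
\qquad x = 1, \dots, m_i .
% MIRID-type internal restriction on the composite item's difficulties:
\beta_{\mathrm{comp},\,x} = \sum_{k} \sigma_k \beta_{kx} + \tau_x .
```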
Wang, Wen-Chung; Liu, Chen-Wei – Educational and Psychological Measurement, 2011
The generalized graded unfolding model (GGUM) has recently been developed to describe item responses to Likert items (agree-disagree) in attitude measurement. In this study, the authors (a) developed two item selection methods in computerized classification testing under the GGUM, the current estimate/ability confidence interval method and the cut…
Descriptors: Computer Assisted Testing, Adaptive Testing, Classification, Item Response Theory
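To make the classification logic concrete, here is a minimal Python sketch of the generic ability-confidence-interval stopping rule used in computerized classification testing: administer items until the confidence interval around the provisional ability estimate no longer contains the cut score. This illustrates the general rule only, not the authors' GGUM-specific item selection methods; the function name and example values are hypothetical.

```python
def classify_or_continue(theta_hat, se_theta, cut_score, z=1.96):
    """Generic ability-confidence-interval rule for computerized
    classification testing: stop and classify once the confidence
    interval around the provisional ability estimate excludes the
    cut score; otherwise keep administering items."""
    lower = theta_hat - z * se_theta
    upper = theta_hat + z * se_theta
    if lower > cut_score:
        return "master"        # whole interval above the cut score
    if upper < cut_score:
        return "non-master"    # whole interval below the cut score
    return "continue"          # interval still straddles the cut score

# Example: provisional estimate 0.40 with SE 0.35 against a cut score of 0.0
print(classify_or_continue(0.40, 0.35, 0.0))  # -> "continue"
```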
Wang, Wen-Chung – 1998
The conventional two-group differential item functioning (DIF) analysis is extended to an analysis of variance-like (ANOVA-like) DIF analysis where multiple factors with multiple groups are compared simultaneously. Moreover, DIF is treated as a parameter to be estimated rather than simply a sign to be detected. This proposed approach allows the…
Descriptors: Analysis of Variance, Foreign Countries, Item Bias, Item Response Theory
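A hedged sketch of the general idea (not necessarily the report's exact notation): the DIF of item i is treated as an estimable parameter and decomposed, ANOVA-style, into main effects and interactions of the grouping factors, here two hypothetical factors A and B.

```latex
% Item difficulty for item i in cell (a, b) of two grouping factors,
% with DIF decomposed into main effects and an interaction (sketch):
\beta_{i(ab)} = \beta_i + \gamma_{ia} + \gamma_{ib} + \gamma_{i(ab)},
% subject to the usual ANOVA-type identification constraints, e.g.
\sum_a \gamma_{ia} = \sum_b \gamma_{ib}
  = \sum_a \gamma_{i(ab)} = \sum_b \gamma_{i(ab)} = 0 .
```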
Wang, Wen-Chung; Su, Ya-Hui – Applied Measurement in Education, 2004
In this study we investigated the effects of the average signed area (ASA) between the item characteristic curves of the reference and focal groups and three test purification procedures on the uniform differential item functioning (DIF) detection via the Mantel-Haenszel (M-H) method through Monte Carlo simulations. The results showed that ASA,…
Descriptors: Test Bias, Student Evaluation, Evaluation Methods, Test Items
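For reference, the Mantel-Haenszel common odds ratio underlying the uniform DIF test, and its ETS delta-scale transform, take the standard forms below (A_k, B_k, C_k, D_k are the cell counts of the 2 x 2 table at matching-score level k and N_k is the level total); the average signed area is, schematically, the signed area between the reference- and focal-group item characteristic curves.

```latex
% Mantel-Haenszel common odds ratio across K matching-score strata
\hat{\alpha}_{\mathrm{MH}} =
  \frac{\sum_{k=1}^{K} A_k D_k / N_k}{\sum_{k=1}^{K} B_k C_k / N_k},
\qquad
\Delta_{\mathrm{MH}} = -2.35 \ln \hat{\alpha}_{\mathrm{MH}} .
% Signed area between the two item characteristic curves (schematic)
\mathrm{ASA} \approx \int \bigl[ P_R(\theta) - P_F(\theta) \bigr] \, d\theta .
```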

Wang, Wen-Chung – Journal of Applied Measurement, 2000
Proposes a factorial procedure for investigating differential distractor functioning in multiple-choice items that models each distractor with a distinct distractibility parameter. Results of a simulation study show that the parameters of the proposed model were recovered very well. Analysis of ten 4-choice items from a college entrance…
Descriptors: College Entrance Examinations, Distractors (Tests), Factor Structure, Foreign Countries
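One generic way to give each distractor its own distractibility parameter (a sketch in the spirit of the abstract above, not the paper's exact factorial specification) is a multinomial-logit form for the choice among distractors given an incorrect response.

```latex
% Probability that an examinee who misses item i selects distractor k,
% with \delta_{ik} the distractibility of distractor k (sketch only):
P(\text{choose } k \mid \text{incorrect on } i) =
  \frac{\exp(\delta_{ik})}{\sum_{k'} \exp(\delta_{ik'})} .
```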
Su, Ya-Hui; Wang, Wen-Chung – Applied Measurement in Education, 2005
Simulations were conducted to investigate factors that influence the Mantel, generalized Mantel-Haenszel (GMH), and logistic discriminant function analysis (LDFA) methods in assessing differential item functioning (DIF) for polytomous items. The results show that the magnitude of DIF contamination in the matching score, as measured by the average…
Descriptors: Discriminant Analysis, Test Bias, Research Methodology, Test Items
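For context, the Mantel statistic for a polytomous item takes the standard form below, where F_k is the sum of focal-group item scores at matching-score level k; this is the usual formulation, not tied to the article's specific simulation conditions.

```latex
% Mantel chi-square for a polytomous item across K strata of the matching score
\chi^2_{\mathrm{Mantel}} =
  \frac{\Bigl( \sum_{k=1}^{K} F_k - \sum_{k=1}^{K} E(F_k) \Bigr)^{2}}
       {\sum_{k=1}^{K} \mathrm{Var}(F_k)},
\qquad
F_k = \sum_{j} y_j \, n_{Fjk},
```

with y_j the item score levels and n_{Fjk} the number of focal-group examinees at level k who obtained score y_j.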
Wang, Wen-Chung; Cheng, Ying-Yao; Wilson, Mark – Educational and Psychological Measurement, 2005
A parallel design, in which items across different scales within an instrument share common stimuli and subjects respond to the common stimulus for each scale, is sometimes used in questionnaires or inventories. Because the items across scales share the same stimuli, the assumption of local item independence may not hold, thereby violating the…
Descriptors: Stimuli, Psychometrics, Test Items, Item Response Theory
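A hedged sketch of one common way to accommodate such stimulus-induced dependence (a testlet-style random effect shared by all items attached to the same stimulus; not necessarily the exact model used in the article):

```latex
% Rasch-type model with a person-by-stimulus random effect \gamma_{ns}
% capturing dependence among items that share stimulus s:
\operatorname{logit} P(X_{nis} = 1) = \theta_n - \beta_i + \gamma_{ns},
\qquad \gamma_{ns} \sim N(0, \sigma_s^2) .
```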
Wang, Wen-Chung; Chen, Cheng-Te – Educational and Psychological Measurement, 2005
This study investigates item parameter recovery, standard error estimates, and fit statistics yielded by the WINSTEPS program under the Rasch model and the rating scale model through Monte Carlo simulations. The independent variables were item response model, test length, and sample size. WINSTEPS yielded practically unbiased estimates for the…
Descriptors: Statistics, Test Length, Rating Scales, Item Response Theory
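To illustrate the kind of Monte Carlo setup the abstract describes, here is a minimal Python sketch that generates dichotomous responses under the Rasch model for a chosen test length and sample size. The generating values are placeholders; a recovery study would then calibrate the simulated data (e.g., in WINSTEPS) and compare the estimates with the generating parameters.

```python
import numpy as np

def simulate_rasch(n_persons=500, n_items=20, seed=0):
    """Generate dichotomous responses under the Rasch model:
    P(X = 1) = 1 / (1 + exp(-(theta - b)))."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 1.0, size=n_persons)   # person abilities
    b = np.linspace(-2.0, 2.0, n_items)            # generating item difficulties
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.uniform(size=p.shape) < p).astype(int), theta, b

# One replication: a 500-person by 20-item response matrix ready for calibration
data, true_theta, true_b = simulate_rasch()
print(data.shape, round(data.mean(), 3))
```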