Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 7
Descriptor
Monte Carlo Methods: 10
Item Response Theory: 7
Models: 7
Computation: 6
Markov Processes: 6
Foreign Countries: 4
Test Bias: 4
Bayesian Statistics: 3
Goodness of Fit: 3
Simulation: 3
Test Items: 3
Source
Educational and Psychological Measurement: 3
Journal of Educational Measurement: 3
Applied Measurement in Education: 1
Applied Psychological Measurement: 1
Journal of Educational and Behavioral Statistics: 1
Journal of Experimental Education: 1
Author
Wang, Wen-Chung: 10
Huang, Hung-Yu: 4
Chen, Cheng-Te: 1
Chen, Po-Hsi: 1
Hung, Lai-Fa: 1
Li, Xiaomin: 1
Shih, Ching-Lin: 1
Su, Chi-Ming: 1
Su, Ya-Hui: 1
Wilson, Mark: 1
Publication Type
Journal Articles: 10
Reports - Research: 8
Reports - Evaluative: 2
Education Level
Higher Education: 2
Junior High Schools: 2
Middle Schools: 2
Postsecondary Education: 2
Secondary Education: 2
Elementary Education: 1
Grade 4: 1
Intermediate Grades: 1
Location
Taiwan: 4
Assessments and Surveys
Graduate Record Examinations: 1
Students Evaluation of…: 1
Trends in International…: 1
Wechsler Adult Intelligence…: 1
Assessment of Differential Item Functioning under Cognitive Diagnosis Models: The DINA Model Example
Li, Xiaomin; Wang, Wen-Chung – Journal of Educational Measurement, 2015
The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable to cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are…
Descriptors: Test Bias, Models, Cognitive Measurement, Evaluation Methods
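As a hedged illustration of the setting in the entry above (not drawn from the article itself), the DINA item response function, and one common way to express DIF in its parameters, can be sketched as:

```latex
% DINA item response function: person i with attribute profile \alpha_i,
% item j with Q-matrix entries q_{jk}, slip parameter s_j and guessing parameter g_j.
\[
P(X_{ij}=1 \mid \boldsymbol{\alpha}_i)
  = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}},
\qquad
\eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}}.
\]
% Uniform DIF in this setting can be expressed as group-specific item parameters,
% e.g. s_j^{(R)} \neq s_j^{(F)} or g_j^{(R)} \neq g_j^{(F)} for reference (R) and
% focal (F) examinees who share the same attribute profile.
```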
Huang, Hung-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
The DINA (deterministic input, noisy "and" gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The slip and guess parameters are included in the DINA item response function that models the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Descriptors: Models, Guessing (Tests), Probability, Ability
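The abstract above describes letting slip and guess vary across persons. A minimal sketch of what such a random-effect extension can look like, under an assumed logit-scale parameterization that is not necessarily the paper's exact specification:

```latex
% Illustrative random-effect parameterization (an assumption, not necessarily
% the paper's exact specification): a continuous person effect \theta_i shifts
% the slip and guessing probabilities on the logit scale.
\[
P(X_{ij}=1 \mid \eta_{ij}, \theta_i) =
\begin{cases}
  \operatorname{logit}^{-1}\!\big(\beta_j^{(1)} + \theta_i\big), & \eta_{ij}=1
    \quad \text{(one minus a person-specific slip)}\\[4pt]
  \operatorname{logit}^{-1}\!\big(\beta_j^{(0)} + \theta_i\big), & \eta_{ij}=0
    \quad \text{(a person-specific guess)}
\end{cases}
\qquad \theta_i \sim N(0, \sigma^2).
\]
```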
Hung, Lai-Fa; Wang, Wen-Chung – Journal of Educational and Behavioral Statistics, 2012
In the human sciences, ability tests or psychological inventories are often repeatedly conducted to measure growth. Standard item response models do not take into account possible autocorrelation in longitudinal data. In this study, the authors propose an item response model to account for autocorrelation. The proposed three-level model consists…
Descriptors: Item Response Theory, Correlation, Models, Longitudinal Studies
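A minimal sketch of how autocorrelation can enter a longitudinal item response model, assuming a Rasch measurement part and an AR(1) structure on the latent trait; this is an illustration, not the authors' exact three-level specification:

```latex
% Measurement part (Rasch form) for person p, item i, occasion t:
\[
\operatorname{logit} P(X_{pit}=1 \mid \theta_{pt}) = \theta_{pt} - b_i.
\]
% Structural part with first-order autocorrelation in the latent trait:
\[
\theta_{pt} = \mu_t + \rho\,(\theta_{p,t-1} - \mu_{t-1}) + \varepsilon_{pt},
\qquad \varepsilon_{pt} \sim N(0, \sigma^2).
\]
```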
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2013
Both testlet design and hierarchical latent traits are fairly common in educational and psychological measurements. This study aimed to develop a new class of higher order testlet response models that consider both local item dependence within testlets and a hierarchy of latent traits. Due to high dimensionality, the authors adopted the Bayesian…
Descriptors: Item Response Theory, Models, Bayesian Statistics, Computation
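The two ingredients named in the abstract above can be sketched, under an assumed 2PL-type item model, as a testlet-specific person effect for local dependence plus first-order traits loading on a higher-order trait; this is illustrative, not the exact model class developed in the article:

```latex
% Item i belongs to testlet t(i) and measures first-order trait d(i);
% \gamma_{p,t(i)} absorbs local dependence within the testlet, and the
% first-order traits load on a single higher-order trait \theta_p^{(2)}.
\[
\operatorname{logit} P(X_{pi}=1) = a_i\big(\theta_{p,d(i)} + \gamma_{p,t(i)} - b_i\big),
\qquad
\theta_{pd} = \lambda_d\,\theta_p^{(2)} + \zeta_{pd},
\]
% with \gamma_{pt} \sim N(0, \sigma_t^2) and \zeta_{pd} \sim N(0, \sigma_d^2).
```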
Huang, Hung-Yu; Wang, Wen-Chung; Chen, Po-Hsi; Su, Chi-Ming – Applied Psychological Measurement, 2013
Many latent traits in the human sciences have a hierarchical structure. This study aimed to develop a new class of higher order item response theory models for hierarchical latent traits that are flexible in accommodating both dichotomous and polytomous items, to estimate both item and person parameters jointly, to allow users to specify…
Descriptors: Item Response Theory, Models, Vertical Organization, Bayesian Statistics
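For the polytomous items mentioned above, one assumed way to attach the same higher-order structure to ordered categories is a partial-credit-type response function; again a sketch, not the article's exact formulation:

```latex
% Partial-credit-type category probabilities for item i (categories 0..M_i,
% thresholds \delta_{ik}, with \delta_{i0} \equiv 0), driven by the first-order
% trait \theta_{p,d(i)}, which in turn loads on a higher-order trait:
\[
P(X_{pi}=x \mid \theta_{p,d(i)}) =
\frac{\exp \sum_{k=0}^{x} \big(\theta_{p,d(i)} - \delta_{ik}\big)}
     {\sum_{m=0}^{M_i} \exp \sum_{k=0}^{m} \big(\theta_{p,d(i)} - \delta_{ik}\big)},
\qquad
\theta_{pd} = \lambda_d\,\theta_p^{(2)} + \epsilon_{pd}.
\]
```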
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2014
In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Computation, Test Reliability
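A one-line sketch of how multilevel sampling can be added on top of hierarchical traits, assuming persons nested in groups; illustrative only:

```latex
% Person p is nested in group g(p); the higher-order trait is split into a
% between-group component u_g and a within-group component e_p:
\[
\theta_p^{(2)} = u_{g(p)} + e_p,
\qquad u_g \sim N(0, \tau^2),
\quad e_p \sim N(0, \sigma^2),
\]
% while the first-order traits still follow \theta_{pd} = \lambda_d \theta_p^{(2)} + \epsilon_{pd}.
```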
Wang, Wen-Chung – Journal of Experimental Education, 2004
Scale indeterminacy in the analysis of differential item functioning (DIF) within the framework of item response theory can be resolved by three anchor item methods: the equal-mean-difficulty method, the all-other anchor item method, and the constant anchor item method. In this article, the applicability and limitations of these three methods are…
Descriptors: Test Bias, Models, Item Response Theory, Comparative Analysis
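A hedged Python sketch of the scale-linking idea behind these anchoring methods under the Rasch model; the item difficulties and anchor choices below are invented for illustration, and the code is not from the article:

```python
import numpy as np

# Separately calibrated item difficulties for the two groups (made-up values).
b_ref = np.array([-1.2, -0.4, 0.1, 0.6, 1.1])   # reference group
b_foc = np.array([-0.9, -0.1, 0.7, 0.9, 1.4])   # focal group

def dif_estimates(b_ref, b_foc, anchor):
    """Shift the focal-group difficulties so the anchor set has equal mean
    difficulty in both groups, then return item-wise DIF estimates
    (focal minus reference) on the common scale."""
    shift = np.mean(b_ref[anchor] - b_foc[anchor])
    return (b_foc + shift) - b_ref

all_items = np.arange(len(b_ref))
print(dif_estimates(b_ref, b_foc, anchor=all_items))         # equal-mean-difficulty anchoring
print(dif_estimates(b_ref, b_foc, anchor=np.array([0, 1])))  # constant anchor: items 1-2 assumed DIF-free
```

The all-other method corresponds to using, for each studied item, all remaining items as the anchor set.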
Wang, Wen-Chung; Su, Ya-Hui – Applied Measurement in Education, 2004
Through Monte Carlo simulations, we investigated how the average signed area (ASA) between the item characteristic curves of the reference and focal groups and three test purification procedures affect uniform differential item functioning (DIF) detection with the Mantel-Haenszel (MH) method. The results showed that ASA,…
Descriptors: Test Bias, Student Evaluation, Evaluation Methods, Test Items
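For reference, a small Python sketch of the Mantel-Haenszel statistic used in the study, reported on the ETS delta scale; the stratum counts are invented for illustration:

```python
import numpy as np

def mantel_haenszel_ddif(right_ref, wrong_ref, right_foc, wrong_foc):
    """Mantel-Haenszel common odds ratio across total-score strata,
    returned on the ETS delta scale (MH D-DIF = -2.35 * ln(alpha_MH)).
    Inputs are per-stratum counts of correct/incorrect responses."""
    right_ref = np.asarray(right_ref, dtype=float)
    wrong_ref = np.asarray(wrong_ref, dtype=float)
    right_foc = np.asarray(right_foc, dtype=float)
    wrong_foc = np.asarray(wrong_foc, dtype=float)
    total = right_ref + wrong_ref + right_foc + wrong_foc
    alpha_mh = np.sum(right_ref * wrong_foc / total) / np.sum(wrong_ref * right_foc / total)
    return -2.35 * np.log(alpha_mh)

# Toy counts for three score strata (made up for the example).
print(mantel_haenszel_ddif([30, 40, 50], [20, 15, 10], [25, 35, 45], [25, 20, 15]))
```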
Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin – Journal of Educational Measurement, 2006
This study presents the random-effects rating scale model (RE-RSM), which takes randomness in the thresholds over persons into account by treating them as random effects and adding a random variable for each threshold in the rating scale model (RSM; Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…
Descriptors: Item Analysis, Rating Scales, Item Response Theory, Monte Carlo Methods
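A sketch of the construction the abstract describes, with illustrative notation: the rating scale model with its thresholds treated as person-specific random effects:

```latex
% Rating scale model with person-specific random thresholds: person p, item i,
% categories 0..M, item location \delta_i, thresholds \tau_{pk} (\tau_{p0} \equiv 0).
\[
P(X_{pi}=x \mid \theta_p) =
\frac{\exp \sum_{k=0}^{x} \big[\theta_p - (\delta_i + \tau_{pk})\big]}
     {\sum_{m=0}^{M} \exp \sum_{k=0}^{m} \big[\theta_p - (\delta_i + \tau_{pk})\big]},
\qquad
\tau_{pk} = \tau_k + u_{pk}, \quad u_{pk} \sim N(0, \sigma_k^2).
\]
% Setting all \sigma_k^2 = 0 recovers the ordinary rating scale model.
```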
Wang, Wen-Chung; Chen, Cheng-Te – Educational and Psychological Measurement, 2005
This study investigates item parameter recovery, standard error estimates, and fit statistics yielded by the WINSTEPS program under the Rasch model and the rating scale model through Monte Carlo simulations. The independent variables were item response model, test length, and sample size. WINSTEPS yielded practically unbiased estimates for the…
Descriptors: Statistics, Test Length, Rating Scales, Item Response Theory
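A minimal Python sketch of the data-generating side of such a recovery study under the Rasch model; the generating values, test length, and sample size are placeholders rather than the article's conditions, and estimation would still be carried out in WINSTEPS or another Rasch program:

```python
import numpy as np

def simulate_rasch(n_persons, n_items, rng):
    """Generate dichotomous responses under the Rasch model for one
    Monte Carlo replication (data generation only; parameter recovery
    is then assessed by feeding the data to an estimation program)."""
    theta = rng.normal(0.0, 1.0, size=(n_persons, 1))   # person abilities
    b = np.linspace(-2.0, 2.0, n_items)                 # generating item difficulties
    p = 1.0 / (1.0 + np.exp(-(theta - b)))              # Rasch response probabilities
    return (rng.random((n_persons, n_items)) < p).astype(int)

rng = np.random.default_rng(0)
data = simulate_rasch(n_persons=500, n_items=30, rng=rng)  # one illustrative condition
print(data.shape, data.mean())
```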