Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 3
Since 2006 (last 20 years): 22
Descriptor
Models: 29
Item Response Theory: 20
Simulation: 14
Computation: 12
Test Items: 12
Computer Software: 7
Foreign Countries: 7
Monte Carlo Methods: 7
Markov Processes: 6
Test Bias: 5
Adaptive Testing: 4
Source
Journal of Educational…: 9
Applied Psychological…: 7
Educational and Psychological…: 7
Journal of Applied Measurement: 2
AERA Online Paper Repository: 1
Journal of Educational and…: 1
Journal of Experimental…: 1
Author
Wang, Wen-Chung: 29
Jin, Kuan-Yu: 5
Huang, Hung-Yu: 4
Liu, Chen-Wei: 3
Wilson, Mark: 3
Hsu, Chia-Ling: 2
Li, Xiaomin: 2
Qiu, Xue-Lan: 2
Su, Chi-Ming: 2
Wu, Shiu-Lien: 2
Bian, Yufang: 1
Publication Type
Journal Articles: 27
Reports - Research: 21
Reports - Evaluative: 7
Speeches/Meeting Papers: 2
Education Level
Higher Education: 3
Postsecondary Education: 3
Junior High Schools: 2
Middle Schools: 2
Secondary Education: 2
Elementary Education: 1
Grade 4: 1
Intermediate Grades: 1
Assessments and Surveys
Graduate Record Examinations: 1
Students Evaluation of…: 1
Trends in International…: 1
Wechsler Adult Intelligence…: 1
Jin, Kuan-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2018
The Rasch facets model was developed to account for facet data, such as student essays graded by raters, but it accounts for only one kind of rater effect (severity). In practice, raters may exhibit various tendencies such as using middle or extreme scores in their ratings, which is referred to as the rater centrality/extremity response style. To…
Descriptors: Scoring, Models, Interrater Reliability, Computation
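For context, the facets model named in this abstract extends the Rasch model with a rater term. A standard textbook form of the many-facet Rasch model (a general reference form, not taken from this particular study) writes the log-odds of examinee n on item i receiving category k rather than k-1 from rater j as:

```latex
\log \frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \lambda_j - \tau_k
```

where \(\theta_n\) is examinee ability, \(\delta_i\) item difficulty, \(\lambda_j\) rater severity, and \(\tau_k\) the threshold for category k. The single severity term \(\lambda_j\) is exactly the limitation the abstract describes: it shifts all of a rater's ratings up or down but cannot represent a centrality/extremity response style.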
Liu, Chen-Wei; Wang, Wen-Chung – Journal of Educational Measurement, 2017
The examinee-selected-item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set of items (e.g., choose one item to respond from a pair of items), always yields incomplete data (i.e., only the selected items are answered and the others have missing data) that are likely nonignorable. Therefore, using…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Data Analysis
Zhan, Peida; Wang, Wen-Chung; Li, Xiaomin; Bian, Yufang – AERA Online Paper Repository, 2016
To measure individual differences in latent attributes more precisely, this study proposed a new cognitive diagnosis model (CDM), referred to as the probabilistic-inputs, noisy conjunctive (PINC) model, which treats the deterministic binary latent attributes as probabilistic and estimates the probabilities directly in the model. Simulation…
Descriptors: Probability, Models, Language Proficiency, Psychometrics
Wang, Wen-Chung; Su, Chi-Ming; Qiu, Xue-Lan – Journal of Educational Measurement, 2014
Ratings given to the same item response may have a stronger correlation than those given to different item responses, especially when raters interact with one another before giving ratings. The rater bundle model was developed to account for such local dependence by forming multiple ratings given to an item response as a bundle and assigning…
Descriptors: Item Response Theory, Interrater Reliability, Models, Correlation
Hsu, Chia-Ling; Wang, Wen-Chung – Journal of Educational Measurement, 2015
Cognitive diagnosis models provide profile information about a set of latent binary attributes, whereas item response models yield a summary report on a latent continuous trait. To utilize the advantages of both models, higher order cognitive diagnosis models were developed in which information about both latent binary attributes and latent…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Cognitive Measurement
Assessment of Differential Item Functioning under Cognitive Diagnosis Models: The DINA Model Example
Li, Xiaomin; Wang, Wen-Chung – Journal of Educational Measurement, 2015
The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable for cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are…
Descriptors: Test Bias, Models, Cognitive Measurement, Evaluation Methods
Huang, Hung-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
The DINA (deterministic inputs, noisy "and" gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The slip and guess parameters are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Descriptors: Models, Guessing (Tests), Probability, Ability
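The slip and guess parameters mentioned in this abstract enter the standard DINA response function as follows (the baseline model being extended, in its usual textbook notation, not the random-effect extension itself):

```latex
P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
  = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}},
\qquad
\eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}}
```

Here \(\eta_{ij} = 1\) when examinee i has mastered every attribute the Q-matrix entry \(q_{jk}\) requires for item j; a master answers correctly unless they slip (probability \(s_j\)), and a non-master answers correctly only by guessing (probability \(g_j\)).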
Hsu, Chia-Ling; Wang, Wen-Chung; Chen, Shu-Ying – Applied Psychological Measurement, 2013
Interest in developing computerized adaptive testing (CAT) under cognitive diagnosis models (CDMs) has increased recently. CAT algorithms that use a fixed-length termination rule frequently lead to different degrees of measurement precision for different examinees. Fixed precision, in which the examinees receive the same degree of measurement…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Diagnostic Tests
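The fixed-length versus fixed-precision distinction in this abstract can be illustrated with a minimal CAT sketch: instead of stopping after a set number of items, the loop stops once the standard error of the ability estimate reaches a target. This is a generic Rasch-based sketch, not the CDM-based algorithm of the cited study; the item bank, the 0.5 SE target, and all function names are illustrative assumptions.

```python
import math
import random

def prob_correct(theta, b):
    """Rasch model probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta: p(1 - p)."""
    p = prob_correct(theta, b)
    return p * (1.0 - p)

def run_cat(item_bank, true_theta, se_target=0.5, max_items=30, seed=0):
    """Administer items until the standard error of theta reaches se_target."""
    rng = random.Random(seed)
    theta = 0.0                        # provisional ability estimate
    administered, responses = [], []
    available = list(item_bank)
    se = float("inf")
    while available and len(administered) < max_items:
        # Select the most informative remaining item at the current estimate.
        b = max(available, key=lambda d: item_information(theta, d))
        available.remove(b)
        administered.append(b)
        responses.append(rng.random() < prob_correct(true_theta, b))
        # A few Newton steps toward the ML estimate, clamped to avoid divergence
        # when the early responses are all correct or all incorrect.
        for _ in range(10):
            info = sum(item_information(theta, d) for d in administered)
            if info <= 1e-9:
                break
            score = sum(int(r) - prob_correct(theta, d)
                        for r, d in zip(responses, administered))
            theta = max(-4.0, min(4.0, theta + score / info))
        se = 1.0 / math.sqrt(sum(item_information(theta, d)
                                 for d in administered))
        if se <= se_target:            # fixed-precision stopping rule
            break
    return theta, se, len(administered)
```

Under a fixed-length rule, examinees whose items happen to be poorly targeted end the test with a larger SE than others; the fixed-precision rule above instead varies test length so every examinee reaches (or approaches) the same measurement precision.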
Jin, Kuan-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
Descriptors: Student Evaluation, Item Response Theory, Models, Simulation
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien – Applied Psychological Measurement, 2013
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Bayesian Statistics
Wang, Wen-Chung; Jin, Kuan-Yu; Qiu, Xue-Lan; Wang, Lei – Journal of Educational Measurement, 2012
In some tests, examinees are required to choose a fixed number of items from a set of given items to answer. This practice creates a challenge to standard item response models, because more capable examinees may have an advantage by making wiser choices. In this study, we developed a new class of item response models to account for the choice…
Descriptors: Item Response Theory, Test Items, Selection, Models
Wang, Wen-Chung; Wu, Shiu-Lien – Journal of Educational Measurement, 2011
Rating scale items have been widely used in educational and psychological tests. These items require people to make subjective judgments, and these subjective judgments usually involve randomness. To account for this randomness, Wang, Wilson, and Shih proposed the random-effect rating scale model in which the threshold parameters are treated as…
Descriptors: Rating Scales, Models, Statistical Analysis, Computation
Hung, Lai-Fa; Wang, Wen-Chung – Journal of Educational and Behavioral Statistics, 2012
In the human sciences, ability tests or psychological inventories are often repeatedly conducted to measure growth. Standard item response models do not take into account possible autocorrelation in longitudinal data. In this study, the authors propose an item response model to account for autocorrelation. The proposed three-level model consists…
Descriptors: Item Response Theory, Correlation, Models, Longitudinal Studies
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2013
Both testlet design and hierarchical latent traits are fairly common in educational and psychological measurements. This study aimed to develop a new class of higher order testlet response models that consider both local item dependence within testlets and a hierarchy of latent traits. Due to high dimensionality, the authors adopted the Bayesian…
Descriptors: Item Response Theory, Models, Bayesian Statistics, Computation
Huang, Hung-Yu; Wang, Wen-Chung; Chen, Po-Hsi; Su, Chi-Ming – Applied Psychological Measurement, 2013
Many latent traits in the human sciences have a hierarchical structure. This study aimed to develop a new class of higher order item response theory models for hierarchical latent traits that are flexible in accommodating both dichotomous and polytomous items, to estimate both item and person parameters jointly, to allow users to specify…
Descriptors: Item Response Theory, Models, Vertical Organization, Bayesian Statistics