Brown, Anna; Maydeu-Olivares, Alberto – Psychological Methods, 2013
In multidimensional forced-choice (MFC) questionnaires, items measuring different attributes are presented in blocks, and participants rank order the items within each block (fully or partially). Such comparative formats can reduce the impact of numerous response biases that often affect single-stimulus items (also known as rating or Likert scales).…
Descriptors: Test Validity, Item Response Theory, Scoring, Questionnaires
Wang, Wen-Chung; Chen, Po-Hsi; Cheng, Ying-Yao – Psychological Methods, 2004
A conventional way to analyze item responses in multiple tests is to apply unidimensional item response models separately, one test at a time. This unidimensional approach, which ignores the correlations between latent traits, yields imprecise measures when tests are short. To resolve this problem, one can use multidimensional item response models…
Descriptors: Item Response Theory, Test Items, Testing, Test Validity