Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 0
Since 2006 (last 20 years) | 2
Descriptor
Item Response Theory | 3
Item Sampling | 3
Models | 3
Bayesian Statistics | 1
Difficulty Level | 1
Error of Measurement | 1
Foreign Countries | 1
Grade 10 | 1
Grade 9 | 1
Interrater Reliability | 1
Measurement | 1
Author
Frey, Andreas | 1
Hecht, Martin | 1
Revuelta, Javier | 1
Schumacker, Randall E. | 1
Siegle, Thilo | 1
Smith, Everett V., Jr. | 1
Weirich, Sebastian | 1
Publication Type
Journal Articles | 3
Reports - Descriptive | 2
Reports - Research | 1
Education Level
Grade 10 | 1
Grade 9 | 1
High Schools | 1
Junior High Schools | 1
Middle Schools | 1
Secondary Education | 1
Location
Germany | 1
Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas – Educational and Psychological Measurement, 2015
Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden on individual students, using several booklets makes it possible to align the difficulty of the presented items with the assumed…
Descriptors: Measurement, Item Sampling, Statistical Analysis, Models
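The booklet logic described in this abstract can be made concrete with a small, hypothetical design: partition the item pool into blocks and pair the blocks so that every block appears in the same number of booklets. This is only a minimal Python sketch of a multiple matrix design; the pool size, block count, pairing scheme, and all names are illustrative assumptions, not the design studied by Hecht et al. (2015).

```python
from itertools import combinations

def build_booklets(item_pool, n_blocks):
    """Partition an item pool into blocks and pair the blocks into booklets.

    With n_blocks blocks this yields n_blocks*(n_blocks-1)/2 booklets, and
    every block (hence every item) appears in exactly n_blocks-1 booklets,
    so no student sees the complete pool but coverage stays balanced.
    """
    # Stride through the pool so the blocks have (nearly) equal size.
    blocks = [item_pool[i::n_blocks] for i in range(n_blocks)]
    # Each booklet is the union of two blocks.
    booklets = [blocks[a] + blocks[b] for a, b in combinations(range(n_blocks), 2)]
    return blocks, booklets

# Hypothetical 30-item pool split into 5 blocks -> 10 booklets of 12 items each.
item_pool = [f"item_{k:02d}" for k in range(1, 31)]
blocks, booklets = build_booklets(item_pool, n_blocks=5)
print(len(booklets), "booklets,", len(booklets[0]), "items per booklet")
```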
Schumacker, Randall E.; Smith, Everett V., Jr. – Educational and Psychological Measurement, 2007
Measurement error is a common theme in classical measurement models used in testing and assessment. In classical measurement models, the definition of measurement error and the subsequent reliability coefficients differ on the basis of the test administration design. Internal consistency reliability specifies error due primarily to poor item…
Descriptors: Measurement Techniques, Error of Measurement, Item Sampling, Item Response Theory
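As a concrete anchor for the classical notions mentioned above, the sketch below computes an internal consistency coefficient (Cronbach's alpha) and the corresponding standard error of measurement for a persons-by-items score matrix. The simulated data, sample sizes, and function names are assumptions made for illustration only, not the authors' analysis.

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal consistency reliability for a persons-by-items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def standard_error_of_measurement(scores, reliability):
    """Classical SEM: standard deviation of total scores * sqrt(1 - reliability)."""
    totals = np.asarray(scores, dtype=float).sum(axis=1)
    return totals.std(ddof=1) * np.sqrt(1 - reliability)

# Simulate 200 examinees answering 20 dichotomous items that share a common
# ability, so the items hang together and alpha is meaningfully positive.
rng = np.random.default_rng(42)
ability = rng.normal(size=(200, 1))
difficulty = rng.normal(size=20)
p_correct = 1 / (1 + np.exp(-(ability - difficulty)))
responses = (rng.random((200, 20)) < p_correct).astype(int)

alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.3f}, SEM = {standard_error_of_measurement(responses, alpha):.3f}")
```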
Revuelta, Javier – Psychometrika, 2004
Two psychometric models are presented for evaluating the difficulty of the distractors in multiple-choice items. They are based on the criterion of rising distractor selection ratios, which facilitates interpretation of the subject and item parameters. Statistical inferential tools are developed in a Bayesian framework: modal a posteriori…
Descriptors: Multiple Choice Tests, Psychometrics, Models, Difficulty Level
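The abstract mentions modal a posteriori estimation of distractor parameters. As a loose, hypothetical illustration of that idea only (not Revuelta's models, which are built around the criterion of rising distractor selection ratios), the sketch below returns the posterior mode of each distractor's selection probability under a symmetric Dirichlet prior.

```python
import numpy as np

def distractor_map_estimates(choice_counts, prior=1.5):
    """Modal a posteriori (MAP) estimates of distractor selection probabilities.

    Given counts of how often each distractor of one item was chosen, place a
    symmetric Dirichlet(prior) on the selection probabilities and return the
    posterior mode: (count_j + prior - 1) / (sum(counts) + K*(prior - 1)),
    which requires prior > 1 so that the mode exists.
    """
    counts = np.asarray(choice_counts, dtype=float)
    k = counts.size
    return (counts + prior - 1.0) / (counts.sum() + k * (prior - 1.0))

# Hypothetical selection counts for the three distractors of one item.
print(distractor_map_estimates([40, 25, 10]).round(3))  # -> [0.529 0.333 0.137]
```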