Burton, Richard F. – Assessment & Evaluation in Higher Education, 2006
Many academic tests (e.g. short-answer and multiple-choice) sample required knowledge with questions scoring 0 or 1 (dichotomous scoring). Few textbooks give useful guidance on the length of test needed to do this reliably. Posey's binomial error model of 1932 provides the best starting point, but allows neither for heterogeneity of question…
Descriptors: Item Sampling, Tests, Test Length, Test Reliability
Revuelta, Javier – Psychometrika, 2004
Two psychometric models are presented for evaluating the difficulty of the distractors in multiple-choice items. They are based on the criterion of rising distractor selection ratios, which facilitates interpretation of the subject and item parameters. Statistical inferential tools are developed in a Bayesian framework: modal a posteriori…
Descriptors: Multiple Choice Tests, Psychometrics, Models, Difficulty Level
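The criterion of rising distractor selection ratios can be given a rough descriptive illustration. The sketch below is an assumption-laden reading, not Revuelta's model: it takes "selection ratio" to mean the proportion of an ability group choosing a given distractor, and "rising" to mean that this proportion is higher in lower-ability groups. The function name, data layout, and this interpretation are all illustrative choices.

```python
def check_rising_ratios(counts_by_group, key):
    """counts_by_group: ability groups ordered low -> high ability;
    each group is a dict mapping option label -> count of examinees
    choosing it.  key: the correct option's label.
    For every distractor, report whether its selection proportion is
    non-increasing as ability rises (i.e. 'rises' toward low ability)."""
    options = set().union(*counts_by_group) - {key}
    result = {}
    for opt in options:
        ratios = [g.get(opt, 0) / sum(g.values()) for g in counts_by_group]
        result[opt] = all(a >= b for a, b in zip(ratios, ratios[1:]))
    return result

# Two ability groups (low, high), correct option 'C': both distractors
# are chosen relatively more often by the low-ability group.
flags = check_rising_ratios(
    [{'A': 10, 'B': 30, 'C': 60}, {'A': 5, 'B': 15, 'C': 80}], key='C')
```

A criterion of this monotone form is what makes the parameters interpretable: a distractor violating it attracts abler examinees, which flags a flawed item.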
Kalisch, Stanley J. – 1974
A tailored testing model employing the beta distribution, whose mean equals the difficulty of an item and whose variance is approximately equal to the sampling variance of the item difficulty, and employing conditional item difficulties, is proposed. The model provides a procedure by which a minimum number of items of a test, consisting of a set…
Descriptors: Adaptive Testing, Branching, Computer Oriented Programs, Decision Making
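The abstract specifies a beta distribution whose mean equals the item difficulty and whose variance approximates the sampling variance of that difficulty. A standard way to obtain such a beta is the method of moments; the sketch below shows only that parameter recovery, not the tailored-testing procedure itself, and the function name and numbers are illustrative:

```python
def beta_from_mean_var(m: float, v: float) -> tuple:
    """Method-of-moments beta parameters (alpha, beta) for a Beta
    distribution with mean m and variance v.  Requires
    0 < v < m*(1-m), otherwise no such beta distribution exists."""
    if not 0 < v < m * (1 - m):
        raise ValueError("need 0 < v < m*(1-m)")
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

# E.g. an item of difficulty 0.5 whose difficulty estimate has
# sampling variance 0.05 corresponds to Beta(2, 2).
a, b = beta_from_mean_var(0.5, 0.05)
```

With item difficulty expressed as a proportion in (0, 1), the beta's mean and variance line up term for term with the abstract's two conditions, which is presumably what makes it a convenient prior for deciding when enough items have been administered.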