Damrongpanit, Suntonrapot – Universal Journal of Educational Research, 2019
The purposes of this study were to test the structural validity and the parameter invariance of the self-discipline measurement model for good student citizenship across models, using data from 1,047 complete questionnaires and from reduced-length questionnaires constructed with the multiple matrix sampling technique. The sample size of this…
Descriptors: Factor Structure, Questionnaires, Test Length, Citizenship
Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas – Educational and Psychological Measurement, 2015
Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden of individual students, using various booklets allows aligning the difficulty of the presented items to the assumed…
Descriptors: Measurement, Item Sampling, Statistical Analysis, Models
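To make the booklet idea concrete, here is a minimal sketch of a multiple matrix design; the pool size, block structure, and cyclic assignment below are invented for illustration and are not the designs evaluated by Hecht et al.

```python
# Hypothetical numbers, chosen only for illustration.
N_ITEMS = 60          # complete item pool
N_BLOCKS = 6          # pool is cut into blocks of 10 items
N_STUDENTS = 30

items = list(range(N_ITEMS))
block_size = N_ITEMS // N_BLOCKS
blocks = [items[i * block_size:(i + 1) * block_size] for i in range(N_BLOCKS)]

# Each booklet pairs two adjacent blocks, so every student answers only a
# subset of the pool while neighbouring booklets share a linking block.
booklets = [blocks[b] + blocks[(b + 1) % N_BLOCKS] for b in range(N_BLOCKS)]

# Distribute booklets to students, e.g. cyclically within a classroom.
assignment = {student: booklets[student % N_BLOCKS] for student in range(N_STUDENTS)}

for student in range(3):
    print(f"student {student} answers items {assignment[student]}")
```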
Schumacker, Randall E.; Smith, Everett V., Jr. – Educational and Psychological Measurement, 2007
Measurement error is a common theme in classical measurement models used in testing and assessment. In classical measurement models, the definition of measurement error and the subsequent reliability coefficients differ on the basis of the test administration design. Internal consistency reliability specifies error due primarily to poor item…
Descriptors: Measurement Techniques, Error of Measurement, Item Sampling, Item Response Theory
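As a reminder of what an internal consistency coefficient estimates under the classical model, a minimal sketch of coefficient alpha follows; the item-score matrix is made up, and this is the textbook computation rather than the administration designs compared by Schumacker and Smith.

```python
import numpy as np

# Hypothetical item-score matrix: rows are examinees, columns are items (0/1).
scores = np.array([
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 0, 1],
])

k = scores.shape[1]                          # number of items
item_var = scores.var(axis=0, ddof=1)        # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores

# Cronbach's alpha: internal-consistency reliability from one administration.
alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)
print(f"coefficient alpha = {alpha:.3f}")
```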
Burton, Richard F. – Assessment & Evaluation in Higher Education, 2006
Many academic tests (e.g. short-answer and multiple-choice) sample required knowledge with questions scoring 0 or 1 (dichotomous scoring). Few textbooks give useful guidance on the length of test needed to do this reliably. Posey's binomial error model of 1932 provides the best starting point, but allows neither for heterogeneity of question…
Descriptors: Item Sampling, Tests, Test Length, Test Reliability
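The binomial starting point that Burton builds on can be written down directly: if a test of n dichotomous questions samples a domain an examinee truly knows with proportion p, the observed proportion correct has standard error sqrt(p(1 - p)/n). The sketch below only illustrates that baseline calculation, with invented numbers, and not Burton's refinements for question heterogeneity.

```python
import math

def binomial_sem(p_true, n_questions):
    """Standard error of the proportion-correct score when each of n
    dichotomously scored questions is an independent Bernoulli(p_true) draw,
    i.e. the basic binomial error model."""
    return math.sqrt(p_true * (1.0 - p_true) / n_questions)

def questions_needed(p_true, target_sem):
    """Smallest test length whose binomial SEM is at or below target_sem."""
    return math.ceil(p_true * (1.0 - p_true) / target_sem ** 2)

# Illustrative numbers only: an examinee who truly knows 70% of the domain.
for n in (20, 50, 100):
    print(f"n = {n:3d}: SEM of proportion correct = {binomial_sem(0.7, n):.3f}")
print("length needed for SEM <= 0.05:", questions_needed(0.7, 0.05))
```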

Bock, R. Darrell; And Others – Educational Researcher, 1982
Describes the evolution of national educational assessment in the United States, its present stage of development, methodological and reporting problems, and approaches applicable to assessment data. Examines how proposed approaches can be applied in the California Assessment Program and in the National Assessment of Educational Progress. (MJL)
Descriptors: Academic Achievement, Educational Assessment, Evaluation Methods, Item Sampling
van den Brink, Wulfert – Evaluation in Education: International Progress, 1982
Binomial models for domain-referenced testing are compared, emphasizing the assumptions underlying the beta-binomial model. Advantages and disadvantages are discussed. A proposed item sampling model is presented which takes the effect of guessing into account. (Author/CM)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Item Sampling, Measurement Techniques
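For orientation, a minimal sketch of the beta-binomial assumption discussed here: examinees' true proportions are taken as Beta-distributed, and observed scores are binomial given the true proportion. The method-of-moments fit and the simulated scores below are illustrative assumptions; the guessing-adjusted item sampling model proposed by van den Brink is not reproduced.

```python
import numpy as np

def fit_beta_binomial(scores, n_items):
    """Method-of-moments fit of the beta-binomial model: each examinee's true
    proportion correct is Beta(alpha, beta), and the observed number correct
    out of n_items is binomial given that proportion."""
    x = np.asarray(scores, dtype=float)
    m1 = x.mean()               # first raw moment
    m2 = (x ** 2).mean()        # second raw moment
    denom = n_items * (m2 / m1 - m1 - 1.0) + m1
    alpha = (n_items * m1 - m2) / denom
    beta = (n_items - m1) * (n_items - m2 / m1) / denom
    return alpha, beta

# Invented number-correct scores on a hypothetical 20-item domain-referenced test.
rng = np.random.default_rng(0)
true_p = rng.beta(6.0, 3.0, size=500)    # examinees' true proportions
scores = rng.binomial(20, true_p)        # observed scores
print("alpha, beta =", fit_beta_binomial(scores, 20))
```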
Revuelta, Javier – Psychometrika, 2004
Two psychometric models are presented for evaluating the difficulty of the distractors in multiple-choice items. They are based on the criterion of rising distractor selection ratios, which facilitates interpretation of the subject and item parameters. Statistical inferential tools are developed in a Bayesian framework: modal a posteriori…
Descriptors: Multiple Choice Tests, Psychometrics, Models, Difficulty Level
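A plain empirical counterpart to the selection-ratio idea (not Revuelta's Bayesian estimators) is to tabulate, within score groups, the share of wrong answers that each distractor attracts; everything in the sketch below, including the simulated responses, is hypothetical.

```python
import numpy as np

# Invented responses to one 4-option multiple-choice item: option 0 is the key,
# options 1-3 are distractors. This is a descriptive distractor table only.
rng = np.random.default_rng(1)
total_score = rng.integers(0, 41, size=1000)           # examinee total scores
p_correct = total_score / 40.0
choice = np.where(rng.random(1000) < p_correct, 0,     # correct answer
                  rng.integers(1, 4, size=1000))       # otherwise a distractor

groups = np.digitize(total_score, bins=[14, 27])       # low / middle / high scorers
for g, label in enumerate(["low", "middle", "high"]):
    wrong = (groups == g) & (choice != 0)
    for d in (1, 2, 3):
        ratio = (choice[wrong] == d).mean() if wrong.any() else float("nan")
        print(f"{label:6s} group: share of wrong answers on distractor {d} = {ratio:.2f}")
```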