Beyza Aksu Dunya; Stefanie Wind – International Journal of Testing, 2025
We explored the practicality of relatively small item pools in the context of low-stakes Computer-Adaptive Testing (CAT), such as CAT procedures that might be used for quick diagnostic or screening exams. We used a basic CAT algorithm without content-balancing or exposure-control restrictions to reflect low-stakes testing scenarios. We examined…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Achievement
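The basic CAT algorithm the abstract describes, with no content balancing or exposure control, reduces to a maximum-information item-selection loop. Below is a minimal sketch under a Rasch model; the pool size, difficulties, starting ability, and fixed test length are illustrative assumptions, not the authors' actual settings.

    import math
    import random

    def rasch_prob(theta, b):
        """Probability of a correct response under the Rasch model."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def item_information(theta, b):
        """Fisher information of a Rasch item at ability theta: p(1 - p)."""
        p = rasch_prob(theta, b)
        return p * (1.0 - p)

    def update_theta(theta, responses, difficulties):
        """A few Newton-Raphson steps toward the MLE of theta, clamped for stability."""
        for _ in range(5):
            grad = sum(x - rasch_prob(theta, b) for x, b in zip(responses, difficulties))
            info = sum(item_information(theta, b) for b in difficulties)
            if info < 1e-6:
                break
            theta = max(-4.0, min(4.0, theta + grad / info))
        return theta

    def basic_cat(pool, true_theta, test_length=15):
        """Maximum-information CAT with no exposure control or content balancing."""
        theta, administered, responses = 0.0, [], []
        available = dict(pool)  # item_id -> Rasch difficulty
        for _ in range(test_length):
            # Select the most informative remaining item at the current estimate.
            item = max(available, key=lambda i: item_information(theta, available[i]))
            b = available.pop(item)
            responses.append(1 if random.random() < rasch_prob(true_theta, b) else 0)
            administered.append(b)
            theta = update_theta(theta, responses, administered)
        return theta

    # A hypothetical small pool: 30 items with difficulties spread over [-2, 2].
    pool = {i: -2.0 + 4.0 * i / 29 for i in range(30)}
    print(basic_cat(pool, true_theta=0.8))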
Morris, Scott B.; Bass, Michael; Howard, Elizabeth; Neapolitan, Richard E. – International Journal of Testing, 2020
The standard error (SE) stopping rule, which terminates a computer adaptive test (CAT) when the SE is less than a threshold, is effective when there are informative questions for all trait levels. However, in domains such as patient-reported outcomes, the items in a bank might all target one end of the trait continuum (e.g., negative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Item Response Theory
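The SE stopping rule terminates the test once SE(θ) = 1/√I(θ) falls below a threshold, where I(θ) is the accumulated test information. A minimal sketch follows; the threshold, safeguard length, and item difficulties are illustrative, and the final lines show the failure mode the article addresses, a bank targeting only one end of the trait continuum.

    import math

    def item_information(theta, b):
        """Fisher information of a Rasch item at ability theta."""
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        return p * (1.0 - p)

    def standard_error(theta, administered):
        """SE(theta) = 1 / sqrt(total test information over administered items)."""
        total_info = sum(item_information(theta, b) for b in administered)
        return float("inf") if total_info == 0.0 else 1.0 / math.sqrt(total_info)

    def should_stop(theta, administered, se_threshold=0.3, max_items=40):
        """SE stopping rule with a maximum-length safeguard."""
        return (standard_error(theta, administered) < se_threshold
                or len(administered) >= max_items)

    # Failure mode discussed in the article: a bank whose items all target the
    # high end gives an examinee at theta = -2 little information, so the SE
    # never reaches the threshold and the rule alone would not terminate.
    high_bank = [0.5 + 0.05 * i for i in range(30)]
    print(standard_error(-2.0, high_bank))  # roughly 0.9, far above 0.3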
Wei, Hua; Lin, Jie – International Journal of Testing, 2015
Out-of-level testing refers to the practice of assessing a student with a test that is intended for students at a higher or lower grade level. Although the appropriateness of out-of-level testing for accountability purposes has been questioned by educators and policymakers, incorporating out-of-level items in formative assessments for accurate…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Instructional Program Divisions
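The measurement rationale for out-of-level items is an IRT property: a Rasch item is most informative near its own difficulty, so items calibrated to an adjacent grade on a common scale measure a student far from the on-grade difficulty range more precisely. A small illustration with made-up difficulties:

    import math

    def rasch_information(theta, b):
        """Fisher information of a Rasch item at ability theta: p(1 - p)."""
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        return p * (1.0 - p)

    # A student performing well below grade level, modeled here as theta = -2.0.
    theta = -2.0
    on_grade = [-0.5, 0.0, 0.5]       # hypothetical on-grade item difficulties
    below_grade = [-2.5, -2.0, -1.5]  # hypothetical lower-grade item difficulties

    print(sum(rasch_information(theta, b) for b in on_grade))     # ~0.32
    print(sum(rasch_information(theta, b) for b in below_grade))  # ~0.72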
Gierl, Mark J.; Lai, Hollis – International Journal of Testing, 2012
Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…
Descriptors: Foreign Countries, Psychometrics, Test Construction, Test Items
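The two steps described here, specialists authoring item models and a computer generating items from them, can be pictured with a toy template generator; the item model, slot values, and answer-key rule below are invented for illustration.

    from itertools import product

    # Step 1: an "item model" -- a template with constrained slots, standing in
    # for the templates that test-development specialists author.
    ITEM_MODEL = "A train travels {speed} km/h for {hours} hours. How far does it go?"
    SLOTS = {"speed": [40, 60, 80], "hours": [2, 3]}

    # Step 2: instantiate every combination of slot values by computer,
    # producing an item stem and its answer key per instance.
    def generate_items(model, slots):
        names = list(slots)
        for values in product(*(slots[n] for n in names)):
            bindings = dict(zip(names, values))
            key = bindings["speed"] * bindings["hours"]  # distance = speed x time
            yield model.format(**bindings), key

    for stem, key in generate_items(ITEM_MODEL, SLOTS):
        print(f"{stem}  [key: {key} km]")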
Kubinger, Klaus D. – International Journal of Testing, 2005
This article emphasizes that the Rasch model is not only very useful for psychological test calibration but is also necessary if the number of solved items is to be used as an examinee's score. Simplified proof that the Rasch model implies specific objective parameter comparisons is given. Consequently, a model check per se is possible. For data…
Descriptors: Psychology, Psychological Testing, Item Banks, Item Response Theory
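Kubinger's claim that only the Rasch model justifies scoring by the number of solved items rests on the raw score being a sufficient statistic for θ. A small sketch (item parameters and response patterns are arbitrary) checks this numerically: two patterns with the same raw score yield likelihoods whose ratio is constant in θ, so inference about θ depends on the responses only through the number correct.

    import math

    def rasch_likelihood(theta, pattern, difficulties):
        """Likelihood of a 0/1 response pattern under the Rasch model."""
        like = 1.0
        for x, b in zip(pattern, difficulties):
            p = 1.0 / (1.0 + math.exp(-(theta - b)))
            like *= p if x else (1.0 - p)
        return like

    difficulties = [-1.0, -0.3, 0.4, 1.2]  # arbitrary item difficulties
    pattern_a = [1, 1, 0, 0]               # raw score 2
    pattern_b = [0, 1, 0, 1]               # raw score 2, different items solved

    # The likelihood ratio of two same-raw-score patterns is constant in theta,
    # so inference about theta uses the data only through the number of solved
    # items -- the sufficiency property underlying the article's argument.
    for theta in (-1.0, 0.0, 1.0, 2.0):
        ratio = (rasch_likelihood(theta, pattern_a, difficulties)
                 / rasch_likelihood(theta, pattern_b, difficulties))
        print(f"theta={theta:+.1f}  ratio={ratio:.4f}")  # always exp(2.2) ~ 9.03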