Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 1
Since 2006 (last 20 years) | 4
Descriptor
Computer Software | 6
Test Items | 6
Computer Assisted Testing | 3
Models | 3
Classification | 2
Educational Assessment | 2
Foreign Countries | 2
Item Response Theory | 2
Programming | 2
Psychometrics | 2
Simulation | 2
Source
International Journal of… | 6
Author
Baghaei, Purya | 1
Gattamorta, Karina A. | 1
Gierl, Mark J. | 1
Lai, Hollis | 1
Myers, Nicholas D. | 1
Penfield, Randall D. | 1
Ravand, Hamdollah | 1
Rupp, Andre A. | 1
Veldkamp, Bernard P. | 1
Walker, Cindy M. | 1
Publication Type
Journal Articles | 6
Reports - Evaluative | 3
Guides - Non-Classroom | 1
Reports - Descriptive | 1
Reports - Research | 1
Education Level
Elementary Secondary Education | 2
Higher Education | 1
Postsecondary Education | 1
Location
Canada | 2
United States | 1
Ravand, Hamdollah; Baghaei, Purya – International Journal of Testing, 2020
More than three decades after their introduction, diagnostic classification models (DCMs) do not seem to have been implemented in educational systems for the purposes for which they were devised. Most DCM research is either methodological, aimed at model development and refinement, or involves retrofitting DCMs to existing nondiagnostic tests and, in the latter case, basically…
Descriptors: Classification, Models, Diagnostic Tests, Test Construction
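As background for this abstract (a standard formulation, not taken from the article itself), the DINA model, one of the most widely cited DCMs, classifies examinees by attribute-mastery patterns through the item response function

    P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}},
    \qquad
    \eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}},

where \alpha_{ik} indicates whether examinee i has mastered attribute k, q_{jk} is the Q-matrix entry linking item j to attribute k, and s_j and g_j are the item's slip and guessing parameters.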
Gierl, Mark J.; Lai, Hollis – International Journal of Testing, 2012
Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…
Descriptors: Foreign Countries, Psychometrics, Test Construction, Test Items
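To make the two-step idea concrete, here is a minimal Python sketch of template-based item generation; the item model, variable ranges, and wording are hypothetical and are not the authors' system.

    # Minimal sketch of template-based automatic item generation.
    # Step 1: a specialist writes an item model (template) with variable slots.
    # Step 2: software fills the slots with all admissible value combinations.
    from itertools import product

    ITEM_MODEL = ("A patient receives {dose} mg of a drug every {hours} hours. "
                  "How many mg are given in one day?")

    def generate_items(doses=(250, 500), intervals=(6, 8, 12)):
        items = []
        for dose, hours in product(doses, intervals):
            stem = ITEM_MODEL.format(dose=dose, hours=hours)
            key = dose * (24 // hours)                      # correct answer
            distractors = {dose, dose * hours, key + dose}  # simple foils
            items.append({"stem": stem, "key": key,
                          "options": sorted(distractors | {key})})
        return items

    if __name__ == "__main__":
        for item in generate_items():
            print(item["stem"], "->", item["key"])

In practice an item model also constrains surface features (context, number ranges, distractor logic) so that the generated items remain psychometrically comparable.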
Gattamorta, Karina A.; Penfield, Randall D.; Myers, Nicholas D. – International Journal of Testing, 2012
Measurement invariance is a common consideration in the evaluation of the validity and fairness of test scores when the tested population contains distinct groups of examinees, such as examinees receiving different forms of a translated test. Measurement invariance in polytomous items has traditionally been evaluated at the item level,…
Descriptors: Foreign Countries, Psychometrics, Test Bias, Test Items
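For orientation (standard notation, not drawn from the article): under a graded response model the cumulative category probabilities for polytomous item j in group g are

    P^{(g)}(X_{ij} \ge k \mid \theta_i)
      = \frac{\exp\big(a_j^{(g)} (\theta_i - b_{jk}^{(g)})\big)}
             {1 + \exp\big(a_j^{(g)} (\theta_i - b_{jk}^{(g)})\big)},
    \qquad k = 1, \dots, m_j,

so item-level invariance requires the discrimination a_j and every threshold b_{jk} to be equal across groups, whereas a step-level analysis asks which individual thresholds differ.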
Veldkamp, Bernard P. – International Journal of Testing, 2008
Integrity™, an online application for testing both the statistical integrity of the test and the academic integrity of the examinees, was evaluated for this review. Program features and the program output are described. An overview of the statistics in Integrity™ is provided, and the application is illustrated with a small simulation study.…
Descriptors: Simulation, Integrity, Statistics, Computer Assisted Testing
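Purely as an illustration of what a small simulation study of test and examinee integrity might involve (this is not the output or the method of Integrity™), one could simulate Rasch responses and flag aberrant response patterns with the lz person-fit statistic:

    # Sketch: simulate Rasch item responses and flag aberrant examinees with lz.
    # Generic illustration only; not the statistics reported by Integrity(TM).
    import numpy as np

    rng = np.random.default_rng(42)

    def rasch_prob(theta, b):
        """P(correct) under the Rasch model; theta: person abilities, b: item difficulties."""
        return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

    def lz(responses, theta, b):
        """Standardized log-likelihood person-fit statistic (lz)."""
        p = rasch_prob(theta, b)
        logit = np.log(p / (1.0 - p))
        l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p), axis=1)
        e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p), axis=1)
        v = np.sum(p * (1 - p) * logit ** 2, axis=1)
        return (l0 - e) / np.sqrt(v)

    n_persons, n_items = 500, 40
    theta = rng.normal(size=n_persons)   # true abilities (estimates would be used in practice)
    b = rng.normal(size=n_items)         # item difficulties
    x = (rng.random((n_persons, n_items)) < rasch_prob(theta, b)).astype(int)

    print("examinees with lz < -2:", int(np.sum(lz(x, theta, b) < -2.0)))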

Walker, Cindy M. – International Journal of Testing, 2001
Provides a tutorial on differential item functioning (DIF) and reviews DIFPACK, a new software package that is specifically designed to test for the presence of DIF. DIFPACK allows the user to test for standard unidirectional DIF, DIF in dichotomous items, DIF in polytomous items, and disordinal, or crossing, DIF. (SLD)
Descriptors: Computer Software, Identification, Item Bias, Test Items
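As a generic illustration of the kind of analysis such software automates (this sketch is not DIFPACK), a Mantel-Haenszel check for uniform DIF on a dichotomous item can be written in a few lines of Python:

    # Generic Mantel-Haenszel test of uniform DIF for one dichotomous item,
    # stratifying examinees on a matching score. Not the DIFPACK procedures.
    import numpy as np

    def mantel_haenszel_dif(item, matching, group):
        """Common odds ratio and ETS delta for a studied item.

        item     : 0/1 responses to the studied item
        matching : matching variable, e.g. rest score (total minus studied item)
        group    : 0 = reference group, 1 = focal group
        """
        num = den = 0.0
        for k in np.unique(matching):
            m = matching == k
            a = np.sum((group == 0) & (item == 1) & m)   # reference, correct
            b = np.sum((group == 0) & (item == 0) & m)   # reference, incorrect
            c = np.sum((group == 1) & (item == 1) & m)   # focal, correct
            d = np.sum((group == 1) & (item == 0) & m)   # focal, incorrect
            n = a + b + c + d
            if n > 0:
                num += a * d / n
                den += b * c / n
        alpha = num / den
        return alpha, -2.35 * np.log(alpha)              # ETS delta scale

Roughly, |delta| of 1.5 or more is conventionally treated as large uniform DIF; crossing DIF, which this statistic has little power to detect, calls for the additional methods the tutorial covers.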
Rupp, Andre A. – International Journal of Testing, 2003
Item response theory (IRT) has become one of the most popular scoring frameworks for measurement data. IRT models are used frequently in computerized adaptive testing, cognitively diagnostic assessment, and test equating. This article reviews two of the most popular software packages for IRT model estimation, BILOG-MG (Zimowski, Muraki, Mislevy, &…
Descriptors: Test Items, Adaptive Testing, Item Response Theory, Computer Software
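For readers new to IRT, the three-parameter logistic model, the most general dichotomous model such packages estimate, has item response function (standard notation, not specific to this review)

    P_j(\theta) = c_j + (1 - c_j)\,
      \frac{\exp\big(a_j (\theta - b_j)\big)}{1 + \exp\big(a_j (\theta - b_j)\big)},

with discrimination a_j, difficulty b_j, and lower asymptote (pseudo-guessing) c_j; constraining c_j = 0 gives the 2PL, and additionally fixing a_j gives the 1PL/Rasch form.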