Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 1
  Since 2016 (last 10 years): 1
  Since 2006 (last 20 years): 3
Descriptor
  Computer Assisted Testing: 4
  Graduate Students: 4
  Item Response Theory: 4
  Test Items: 4
  Test Format: 3
  Undergraduate Students: 3
  Adaptive Testing: 2
  College Entrance Examinations: 2
  Comparative Analysis: 2
  Correlation: 2
  Difficulty Level: 2
Author
  Bulut, Okan: 1
  Iran-Nejad, Asghar: 1
  Kalender, Ilker: 1
  Kan, Adnan: 1
  Kaya, Elif: 1
  O'Grady, Stefan: 1
  Thoma, Stephen J.: 1
  Wise, Steven L.: 1
  Xu, Yuejin: 1
Publication Type
  Journal Articles: 4
  Reports - Research: 4
Education Level
  Higher Education: 3
  Postsecondary Education: 3
  Secondary Education: 1
Location
  Turkey: 1
  Turkey (Ankara): 1
Assessments and Surveys
  Defining Issues Test: 1
Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification
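As background to the classification problem this entry raises, here is a minimal sketch, not drawn from the study itself, of how an IRT ability estimate (theta) can be mapped to proficiency categories with fixed cut scores. The cut values, level labels, and function name are illustrative assumptions.

import bisect

# Hypothetical cut scores on the theta scale and the proficiency
# labels for the bands they define (one more label than cuts).
CUTS = [-1.0, 0.0, 1.0]
LEVELS = ["A2", "B1", "B2", "C1"]

def classify(theta):
    """Return the label of the theta band containing the estimate."""
    return LEVELS[bisect.bisect_right(CUTS, theta)]

print(classify(0.4))  # -> "B2"

Misclassification then stems from error in the theta estimate near a cut score, which is where adaptive item selection can help by concentrating measurement precision around the cuts.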
Bulut, Okan; Kan, Adnan – Eurasian Journal of Educational Research, 2012
Problem Statement: Computerized adaptive testing (CAT) is a sophisticated and efficient way of delivering examinations. In CAT, items for each examinee are selected from an item bank based on the examinee's responses to the items. In this way, the difficulty level of the test is adjusted based on the examinee's ability level. Instead of…
Descriptors: Adaptive Testing, Computer Assisted Testing, College Entrance Examinations, Graduate Students
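The abstract describes the core CAT loop: pick the item most informative at the current ability estimate, score the response, update the estimate, repeat. Below is a minimal sketch under a Rasch (1PL) model; the item bank, the fixed-step ability update (a real CAT would use maximum-likelihood or EAP estimation), and all names are illustrative assumptions, not the authors' procedure.

import math

def p_correct(theta, b):
    """Rasch probability of answering an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def information(theta, b):
    """Fisher information of a Rasch item: p * (1 - p)."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def run_cat(bank, answer, n_items=5):
    """Administer n_items adaptively; answer(i) returns True if correct."""
    theta, used = 0.0, set()
    for _ in range(n_items):
        # Select the unused item with maximum information at theta.
        i = max((j for j in range(len(bank)) if j not in used),
                key=lambda j: information(theta, bank[j]))
        used.add(i)
        # Crude fixed-step update: move toward harder items after a
        # correct response, easier after an incorrect one.
        theta += 0.5 if answer(i) else -0.5
    return theta

# Illustrative run: five items, examinee gets the three easiest right.
print(run_cat([-2.0, -1.0, 0.0, 1.0, 2.0], lambda i: i <= 2))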
Xu, Yuejin; Iran-Nejad, Asghar; Thoma, Stephen J. – Journal of Interactive Online Learning, 2007
The purpose of the study was to determine the comparability of an online version of the Defining Issues Test 2 (DIT2) to the original paper-and-pencil version. The study employed methods from both Classical Test Theory (CTT) and Item Response Theory (IRT). Findings from CTT analyses supported the reliability and discriminant validity of both versions.…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Test Theory
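For the CTT side of a comparability analysis like this one, internal-consistency reliability is conventionally summarized with Cronbach's alpha. A minimal sketch follows, using illustrative 0/1 item scores rather than data from the study.

def cronbach_alpha(scores):
    """Alpha for a matrix of item scores (rows = examinees, cols = items)."""
    k = len(scores[0])  # number of items
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1.0 - sum(item_vars) / total_var)

# Illustrative responses from four examinees to four items.
data = [[1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 0, 0, 1],
        [1, 1, 1, 1]]
print(round(cronbach_alpha(data), 3))  # -> 0.667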

Wise, Steven L.; And Others – Journal of Educational Measurement, 1992
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT), in which students choose the difficulty level of their test items, was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
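The self-adapted format differs from a CAT only in who controls difficulty: the examinee chooses a band before each item rather than an algorithm. A minimal sketch of that selection step, with a hypothetical banded item pool:

import random

# Hypothetical item difficulties grouped into examinee-selectable bands.
BANK = {"easy": [-1.5, -1.0, -0.5],
        "medium": [-0.2, 0.0, 0.3],
        "hard": [0.8, 1.2, 1.6]}

def self_adapted_item(chosen_band, used):
    """Draw an unadministered item from the band the examinee chose."""
    pool = [b for b in BANK[chosen_band] if b not in used]
    item = random.choice(pool)
    used.add(item)
    return item

used = set()
print(self_adapted_item("medium", used))  # examinee picks the difficulty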