Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 8
Descriptor
Test Format: 8
Test Length: 8
Test Items: 6
Comparative Analysis: 5
Computer Assisted Testing: 5
Item Response Theory: 4
Sample Size: 4
Monte Carlo Methods: 3
Test Construction: 3
Accuracy: 2
Adaptive Testing: 2
Source
Applied Measurement in…: 1
Education and Information…: 1
International Journal of…: 1
Measurement:…: 1
Participatory Educational…: 1
ProQuest LLC: 1
Quality Assurance in…: 1
Research Matters: 1
Author
Benton, Tom: 1
Dogan, Nuri: 1
Erdem-Kara, Basak: 1
Gelbal, Selahattin: 1
Jing Ma: 1
Kiliç, Abdullah Faruk: 1
Kárász, Judit T.: 1
Lixin Yuan: 1
Luo, Yong: 1
Minqiang Zhang: 1
Ozdemir, Burhanettin: 1
Publication Type
Journal Articles: 7
Reports - Research: 7
Dissertations/Theses -…: 1
Location
United Kingdom: 1
Jing Ma – ProQuest LLC, 2024
This study investigated the impact of delayed scoring of polytomous items on measurement precision, classification accuracy, and test security in mixed-format adaptive testing. Using the shadow-test approach, a simulation study was conducted across various test designs, test lengths, and numbers and locations of polytomous items. Results showed that while…
Descriptors: Scoring, Adaptive Testing, Test Items, Classification
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
Luo, Yong – Measurement: Interdisciplinary Research and Perspectives, 2021
To date, only frequentist model-selection methods have been studied with mixed-format data in the context of IRT model selection, and it is unknown how popular Bayesian model-selection methods such as DIC, WAIC, and LOO perform. In this study, we present the results of a comprehensive simulation study that compared the performances of eight…
Descriptors: Item Response Theory, Test Format, Selection, Methods
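The Bayesian criteria this abstract names (DIC, WAIC, LOO) are all computed from posterior draws. As a general illustration only (not the study's own code), here is a minimal Python sketch of the standard WAIC formula applied to a matrix of pointwise posterior log-likelihoods, with hypothetical numbers:

```python
import math

def waic(log_lik):
    """WAIC from an S x N matrix of pointwise log-likelihoods,
    log_lik[s][i] = log p(y_i | theta_s) for posterior draw s.
    Standard formula: WAIC = -2 * (lppd - p_waic)."""
    S = len(log_lik)
    N = len(log_lik[0])
    lppd = 0.0
    p_waic = 0.0
    for i in range(N):
        col = [log_lik[s][i] for s in range(S)]
        # log of the posterior-mean likelihood, computed stably (log-sum-exp)
        m = max(col)
        lppd += m + math.log(sum(math.exp(v - m) for v in col) / S)
        # posterior variance of the pointwise log-likelihood
        mean = sum(col) / S
        p_waic += sum((v - mean) ** 2 for v in col) / (S - 1)
    return -2.0 * (lppd - p_waic)

# Toy example: 3 posterior draws, 2 observations (hypothetical values)
ll = [[-0.7, -1.2], [-0.6, -1.1], [-0.8, -1.3]]
print(round(waic(ll), 3))
```

Lower WAIC indicates better expected predictive performance; LOO is typically estimated from the same log-likelihood matrix via importance sampling.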
Erdem-Kara, Basak; Dogan, Nuri – International Journal of Assessment Tools in Education, 2022
Recently, adaptive test approaches have become a viable alternative to traditional fixed-item tests. The main advantage of adaptive tests is that they reach the desired measurement precision with fewer items. However, fewer items mean that each item has a greater effect on ability estimation, and such tests are therefore open to more…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Test Construction
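The precision advantage of adaptive tests comes from administering each examinee the most informative remaining item at their current ability estimate. A minimal sketch of this standard maximum-information selection rule under the 2PL model (a generic illustration with a hypothetical four-item pool, not tied to this study):

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, pool, used):
    """Pick the unused item with maximum information at theta."""
    best, best_info = None, -1.0
    for idx, (a, b) in enumerate(pool):
        if idx in used:
            continue
        info = item_info(theta, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best

# Hypothetical item pool as (discrimination a, difficulty b) pairs
pool = [(1.0, -1.0), (1.2, 0.0), (0.8, 0.5), (1.5, 1.0)]
print(next_item(0.0, pool, used=set()))  # highest-information item at theta = 0
```

In operational CAT the ability estimate is updated after each response and the loop repeats until a stopping rule (e.g., a target standard error) is met.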
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents, and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of solution p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
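The study's own closed formula is not reproduced here, but the familiar special case behind the p = 1/2 condition is standard: a Rasch item answered at p = 1/2 contributes its maximum Fisher information of 1/4, so a test of L such items has information L/4 and an ability standard error of 2/√L:

```python
import math

def se_theta(test_length):
    """Standard error of ability for a Rasch test in which every item is
    answered at p = 1/2: each item contributes its maximum information
    of 1/4, so test information is L/4 and SE = 1/sqrt(L/4) = 2/sqrt(L)."""
    return 2.0 / math.sqrt(test_length)

# Doubling precision requires quadrupling test length
for L in (16, 25, 64):
    print(L, round(se_theta(L), 3))
```

This is why adaptive tests, which keep items near p = 1/2 for each examinee, reach a given precision with fewer items than fixed forms.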
Uysal, Ibrahim; Sahin-Kürsad, Merve; Kiliç, Abdullah Faruk – Participatory Educational Research, 2022
The aim of the study was to examine whether the common items in mixed-format tests (e.g., multiple-choice and essay items) contain parameter drift in test equating performed with the common-item nonequivalent groups design. In this study, which was carried out using a Monte Carlo simulation with a fully crossed design, the factors of test…
Descriptors: Test Items, Test Format, Item Response Theory, Equated Scores
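Common-item equating of this kind first places the anchor items' parameter estimates on a shared scale before drift can be assessed. Mean-sigma linking is one standard moment method; the sketch below is purely illustrative, with hypothetical anchor-item difficulties, and is not the study's own procedure:

```python
import statistics

def mean_sigma(b_old, b_new):
    """Mean-sigma linking: find A, B so that new-form difficulties map
    onto the old form's scale via b_old ≈ A * b_new + B."""
    A = statistics.pstdev(b_old) / statistics.pstdev(b_new)
    B = statistics.mean(b_old) - A * statistics.mean(b_new)
    return A, B

# Hypothetical anchor-item difficulties estimated on the two forms
b_old = [-1.0, 0.0, 1.0]
b_new = [-0.5, 0.5, 1.5]
A, B = mean_sigma(b_old, b_new)
print(round(A, 3), round(B, 3))
```

After transformation, an anchor item whose difficulty still differs markedly between forms is a candidate for item parameter drift and may be dropped from the linking set.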
Benton, Tom – Research Matters, 2021
Computer adaptive testing is intended to make assessment more reliable by tailoring the difficulty of the questions a student has to answer to their level of ability. Most commonly, this benefit is used to justify the length of tests being shortened whilst retaining the reliability of a longer, non-adaptive test. Improvements due to adaptive…
Descriptors: Risk, Item Response Theory, Computer Assisted Testing, Difficulty Level
Ozdemir, Burhanettin; Gelbal, Selahattin – Education and Information Technologies, 2022
Computerized adaptive tests (CAT) apply an adaptive process in which the items are tailored to individuals' ability scores. Multidimensional CAT (MCAT) designs differ in the item selection, ability estimation, and termination methods used. This study aims at investigating the performance of the MCAT designs used to…
Descriptors: Scores, Computer Assisted Testing, Test Items, Language Proficiency