Publication Date
In 2025: 2
Since 2024: 6
Since 2021 (last 5 years): 21
Since 2016 (last 10 years): 42
Since 2006 (last 20 years): 103
Descriptor
Adaptive Testing: 169
Item Response Theory: 169
Test Items: 169
Computer Assisted Testing: 146
Simulation: 47
Item Banks: 44
Test Construction: 42
Difficulty Level: 33
Foreign Countries: 26
Ability: 25
Estimation (Mathematics): 23
Author
Glas, Cees A. W.: 7
Chang, Hua-Hua: 5
Davey, Tim: 4
Dodd, Barbara G.: 4
Meijer, Rob R.: 4
Samejima, Fumiko: 4
van der Linden, Wim J.: 4
Berger, Martijn P. F.: 3
De Ayala, R. J.: 3
Plake, Barbara S.: 3
Qian, Hong: 3
Audience
Practitioners: 1
Researchers: 1
Location
Indonesia: 3
Netherlands: 3
Turkey: 3
Florida: 2
Taiwan: 2
Argentina: 1
Australia: 1
California: 1
China: 1
Denmark: 1
Idaho: 1
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. The majority of previous research studies have focused on investigating IPE under linear testing. There is a lack of IPE research under adaptive testing. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
Kylie Gorney; Mark D. Reckase – Journal of Educational Measurement, 2025
In computerized adaptive testing, item exposure control methods are often used to provide a more balanced usage of the item pool. Many of the most popular methods, including the restricted method (Revuelta and Ponsoda), use a single maximum exposure rate to limit the proportion of times that each item is administered. However, Barrada et al.…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
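Exposure-control methods of this kind cap the proportion of examinees to whom any single item may be administered. As a rough illustration only (the function names and the 0.25 cap below are invented for this sketch and are not taken from Revuelta and Ponsoda or from Gorney and Reckase), a cap-based selection step might look like:

```python
import numpy as np

def eligible_items(admin_counts, n_examinees, r_max=0.25):
    """Indices of items whose observed exposure rate is still below r_max."""
    counts = np.asarray(admin_counts, dtype=float)
    if n_examinees == 0:
        return np.arange(counts.size)
    return np.where(counts / n_examinees < r_max)[0]

def select_item(info, admin_counts, n_examinees, r_max=0.25):
    """Most informative item among those still under the exposure cap.

    info[i] is the Fisher information of item i at the current ability
    estimate; capped items are withheld, falling back to the full pool
    only if every item has reached the cap.
    """
    info = np.asarray(info, dtype=float)
    pool = eligible_items(admin_counts, n_examinees, r_max)
    if pool.size == 0:
        pool = np.arange(info.size)
    return int(pool[np.argmax(info[pool])])
```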
Bayesian Logistic Regression: A New Method to Calibrate Pretest Items in Multistage Adaptive Testing
TsungHan Ho – Applied Measurement in Education, 2023
An operational multistage adaptive test (MST) requires the development of a large item bank and the effort to continuously replenish the item bank due to concerns about test security and validity over the long term. New items should be pretested and linked to the item bank before being used operationally. The linking item volume fluctuations in…
Descriptors: Bayesian Statistics, Regression (Statistics), Test Items, Pretesting
Lim, Hwanggyu; Choe, Edison M. – Journal of Educational Measurement, 2023
The residual differential item functioning (RDIF) detection framework was developed recently under a linear testing context. To explore the potential application of this framework to computerized adaptive testing (CAT), the present study investigated the utility of the RDIF_R statistic both as an index for detecting uniform DIF of…
Descriptors: Test Items, Computer Assisted Testing, Item Response Theory, Adaptive Testing
Montserrat Beatriz Valdivia Medinaceli – ProQuest LLC, 2023
My dissertation examines three current challenges of international large-scale assessments (ILSAs) associated with the transition from linear testing to an adaptive testing design. ILSAs are important for making comparisons among populations and informing countries about the quality of their educational systems. ILSA's results inform policymakers…
Descriptors: International Assessment, Achievement Tests, Adaptive Testing, Test Items
Stefanie A. Wind; Beyza Aksu-Dunya – Applied Measurement in Education, 2024
Careless responding is a pervasive concern in research using affective surveys. Although researchers have considered various methods for identifying careless responses, studies are limited that consider the utility of these methods in the context of computer adaptive testing (CAT) for affective scales. Using a simulation study informed by recent…
Descriptors: Response Style (Tests), Computer Assisted Testing, Adaptive Testing, Affective Measures
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is not only influenced by students' ability level but also their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
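One common way to operationalize rapid guessing (offered here only as an illustration, not necessarily the method Gorgun and Bulut use) is a per-item response-time threshold, for example a fixed fraction of each item's average response time; the fraction and floor below are assumptions for the sketch:

```python
import numpy as np

def flag_rapid_guesses(rt, item_mean_rt, frac=0.10, floor=1.0):
    """Flag responses faster than a per-item time threshold.

    rt           : one examinee's response times in seconds, one per item
    item_mean_rt : average response time for each item across examinees
    frac         : threshold as a fraction of the item's mean time (assumed 10%)
    floor        : never let the threshold drop below this many seconds
    """
    thresholds = np.maximum(frac * np.asarray(item_mean_rt, dtype=float), floor)
    return np.asarray(rt, dtype=float) < thresholds

# The third response is far quicker than that item's typical time.
print(flag_rapid_guesses([12.0, 8.5, 0.9], [30.0, 25.0, 40.0]))  # [False False  True]
```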
Diyorjon Abdullaev; Djuraeva Laylo Shukhratovna; Jamoldinova Odinaxon Rasulovna; Jumanazarov Umid Umirzakovich; Olga V. Staroverova – International Journal of Language Testing, 2024
Local item dependence (LID) refers to the situation where responses to items in a test or questionnaire are influenced by responses to other items in the test. This could be due to shared prompts, item content similarity, and deficiencies in item construction. LID due to a shared prompt is highly probable in cloze tests where items are nested…
Descriptors: Undergraduate Students, Foreign Countries, English (Second Language), Second Language Learning
Umi Laili Yuhana; Eko Mulyanto Yuniarno; Wenny Rahayu; Eric Pardede – Education and Information Technologies, 2024
In an online learning environment, it is important to establish a suitable assessment approach that can be adapted on the fly to accommodate the varying learning paces of students. At the same time, it is essential that assessment criteria remain compliant with the expected learning outcomes of the relevant education standard which predominantly…
Descriptors: Adaptive Testing, Electronic Learning, Elementary School Students, Student Evaluation
Ozge Ersan Cinar – ProQuest LLC, 2022
In educational tests, a group of questions related to a shared stimulus is called a testlet (e.g., a reading passage with multiple related questions). Use of testlets is very common in educational tests. Additionally, computerized adaptive testing (CAT) is a mode of testing where the test forms are created in real time tailoring to the test…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Educational Testing
Kreitchmann, Rodrigo S.; Sorrel, Miguel A.; Abad, Francisco J. – Educational and Psychological Measurement, 2023
Multidimensional forced-choice (FC) questionnaires have been consistently found to reduce the effects of socially desirable responding and faking in noncognitive assessments. Although FC has been considered problematic for providing ipsative scores under the classical test theory, item response theory (IRT) models enable the estimation of…
Descriptors: Measurement Techniques, Questionnaires, Social Desirability, Adaptive Testing
Hanif Akhtar – International Society for Technology, Education, and Science, 2023
For efficiency, the Computerized Adaptive Test (CAT) algorithm selects items with the maximum information, typically with a 50% probability of being answered correctly. However, examinees may not be satisfied if they correctly answer only 50% of the items. Researchers discovered that changing the item selection algorithms to choose easier items (i.e.,…
Descriptors: Success, Probability, Computer Assisted Testing, Adaptive Testing
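For the Rasch and 2PL models, item information is I(θ) = a²p(1 − p), which peaks when the success probability p is 0.5; that is why maximum-information selection yields roughly 50% correct answers, and why raising the target success rate amounts to picking easier items. A minimal sketch of both selection rules (the function names are ours, not from the paper, and the equivalence with maximum-information selection holds when items share a common discrimination):

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """2PL Fisher information a^2 * p * (1 - p); largest when p = 0.5 (theta = b)."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1 - p)

def select_item(theta, a, b, target_p=0.5):
    """Pick the item whose expected success probability is closest to target_p.

    target_p = 0.5 mirrors maximum-information selection for a pool with equal
    discriminations; a higher target (e.g. 0.7) deliberately picks easier items,
    trading some precision for examinee experience, as the abstract describes.
    """
    return int(np.argmin(np.abs(p_correct(theta, np.asarray(a), np.asarray(b)) - target_p)))
```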
Cai, Liuhan; Albano, Anthony D.; Roussos, Louis A. – Measurement: Interdisciplinary Research and Perspectives, 2021
Multistage testing (MST), an adaptive test delivery mode that involves algorithmic selection of predefined item modules rather than individual items, offers a practical alternative to linear and fully computerized adaptive testing. However, interactions across stages between item modules and examinee groups can lead to challenges in item…
Descriptors: Adaptive Testing, Test Items, Item Response Theory, Test Construction
Carol Eckerly; Yue Jia; Paul Jewsbury – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring
Wyse, Adam E.; McBride, James R. – Journal of Educational Measurement, 2021
A key consideration when giving any computerized adaptive test (CAT) is how much adaptation is present when the test is used in practice. This study introduces a new framework to measure the amount of adaptation of Rasch-based CATs based on looking at the differences between the selected item locations (Rasch item difficulty parameters) of the…
Descriptors: Item Response Theory, Computer Assisted Testing, Adaptive Testing, Test Items
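One simple way to quantify adaptation in a Rasch-based CAT (an illustration under our own assumptions, not the specific statistics Wyse and McBride propose) is to compare the difficulties of the administered items with the provisional ability estimates in force when each item was selected:

```python
import numpy as np

def adaptation_summary(selected_b, provisional_theta):
    """Summarize how closely administered item difficulties tracked interim ability.

    selected_b        : Rasch difficulties of the items actually administered
    provisional_theta : the ability estimate used when each item was chosen
    Returns the mean absolute difficulty-ability gap and their correlation;
    a small gap and a high correlation indicate a strongly adaptive test.
    """
    b = np.asarray(selected_b, dtype=float)
    t = np.asarray(provisional_theta, dtype=float)
    return float(np.mean(np.abs(b - t))), float(np.corrcoef(b, t)[0, 1])
```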