Publication Date
  In 2025: 2
  Since 2024: 3
  Since 2021 (last 5 years): 33
  Since 2016 (last 10 years): 76
  Since 2006 (last 20 years): 168
Descriptor
  Computer Assisted Testing: 252
  Item Response Theory: 252
  Test Items: 252
  Adaptive Testing: 145
  Test Construction: 80
  Item Banks: 66
  Simulation: 54
  Difficulty Level: 50
  Foreign Countries: 45
  Item Analysis: 44
  Psychometrics: 36
Author
  Alonzo, Julie: 15
  Tindal, Gerald: 15
  Anderson, Daniel: 7
  Lai, Cheng Fei: 7
  Park, Bitnara Jasmine: 7
  Chang, Hua-Hua: 6
  Glas, Cees A. W.: 6
  van der Linden, Wim J.: 6
  Davey, Tim: 4
  Dodd, Barbara G.: 4
  Meijer, Rob R.: 4
Audience
  Researchers: 2
  Practitioners: 1
  Students: 1
Location
  Oregon: 8
  Netherlands: 4
  Turkey: 4
  Australia: 3
  Germany: 3
  Indonesia: 3
  Taiwan: 3
  Denmark: 2
  Florida: 2
  Singapore: 2
  United Kingdom: 2
Laws, Policies, & Programs
  Individuals with Disabilities…: 8
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. Most previous research has investigated IPE under linear testing; IPE under adaptive testing remains understudied. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
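A minimal sketch of the phenomenon described above, simulating IPE as a position-dependent difficulty shift under the Rasch model; the drift size and sampling choices are illustrative assumptions, not values from the article.

```python
# Sketch: item position effect (IPE) modeled as a linear difficulty drift.
# Assumption (not from the abstract): difficulty grows by `drift` logits per
# position, a common way to simulate fatigue-driven IPE.
import numpy as np

rng = np.random.default_rng(7)

def p_correct(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

n_persons, n_items = 2000, 40
theta = rng.normal(0, 1, n_persons)
b = rng.normal(0, 1, n_items)
drift = 0.02  # hypothetical logits of added difficulty per position

# Each person sees the items in a random order, so position varies.
order = np.array([rng.permutation(n_items) for _ in range(n_persons)])
positions = np.argsort(order, axis=1)      # position of each item per person
b_eff = b[None, :] + drift * positions     # position-shifted difficulty
responses = rng.random((n_persons, n_items)) < p_correct(theta[:, None], b_eff)

# Items administered late should look harder than their bank difficulty.
late = positions >= n_items // 2
print("P(correct) early:", responses[~late].mean().round(3))
print("P(correct) late: ", responses[late].mean().round(3))
```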
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
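For a concrete baseline, here is a hedged sketch of a Mantel-Haenszel check for uniform DIF, a classical counterpart to the IRT-based methods the article covers; variable names and the matching variable are illustrative.

```python
# Sketch: a Mantel-Haenszel check for uniform DIF on one studied item,
# matching examinees on total score.
import numpy as np

def mh_odds_ratio(item, total, group):
    """Mantel-Haenszel common odds ratio across total-score strata.

    item  : 0/1 responses to the studied item
    total : matching variable (total or rest score)
    group : 0 = reference, 1 = focal
    """
    num = den = 0.0
    for k in np.unique(total):
        s = total == k
        a = np.sum(s & (group == 0) & (item == 1))  # ref correct
        b = np.sum(s & (group == 0) & (item == 0))  # ref incorrect
        c = np.sum(s & (group == 1) & (item == 1))  # focal correct
        d = np.sum(s & (group == 1) & (item == 0))  # focal incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den if den > 0 else np.nan

# alpha near 1 means no uniform DIF; ETS reports it as MH D-DIF = -2.35*ln(alpha).
```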
Jackson, Kayla – ProQuest LLC, 2023
Prior research highlights the benefits of multimode surveys and best practices for item-by-item (IBI) and matrix-type survey items. Some researchers have explored whether mode differences for online and paper surveys persist for these survey item types. However, no studies discuss measurement invariance when both item types and online modes are…
Descriptors: Test Items, Surveys, Error of Measurement, Item Response Theory
Demir, Seda – Journal of Educational Technology and Online Learning, 2022
The purpose of this research was to evaluate the effect of item pool and selection algorithms on computerized classification testing (CCT) performance in terms of several classification evaluation metrics. For this purpose, response patterns for 1,000 examinees were generated in R, along with eight item pools of 150, 300, 450, and 600 items…
Descriptors: Test Items, Item Banks, Mathematics, Computer Assisted Testing
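A sketch of the sequential probability ratio test (SPRT), a common decision rule in computerized classification testing; the cutscore, indifference region, and error rates below are illustrative, and the article's specific selection algorithms may differ.

```python
# Sketch: SPRT classification around a cutscore under the Rasch model.
import numpy as np

def rasch_p(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def sprt_classify(responses, bs, cut=0.0, delta=0.3, alpha=0.05, beta=0.05):
    """Return 'pass', 'fail', or 'continue' after the observed responses."""
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for u, b in zip(responses, bs):
        p1 = rasch_p(cut + delta, b)   # H1: examinee above the cut
        p0 = rasch_p(cut - delta, b)   # H0: examinee below the cut
        llr += np.log(p1 / p0) if u else np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "continue"  # pool exhausted before a decision

print(sprt_classify([1, 1, 0, 1, 1, 1], bs=[0.0, 0.2, -0.1, 0.1, 0.3, 0.0]))
```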
Lim, Hwanggyu; Choe, Edison M. – Journal of Educational Measurement, 2023
The residual differential item functioning (RDIF) detection framework was developed recently under a linear testing context. To explore the potential application of this framework to computerized adaptive testing (CAT), the present study investigated the utility of the RDIF_R statistic both as an index for detecting uniform DIF of…
Descriptors: Test Items, Computer Assisted Testing, Item Response Theory, Adaptive Testing
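The exact RDIF_R statistic is defined in the article; what follows is only a simplified residual-based illustration of the general idea, comparing mean model residuals across groups.

```python
# Sketch: a raw-residual DIF index in the spirit of RDIF, computed from
# observed-minus-expected responses. Simplified illustration only; not the
# exact RDIF_R statistic or its standard error from the article.
import numpy as np

def rasch_p(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def raw_residual_dif(item, theta, group, b):
    """Mean residual in the focal group minus the reference group.

    Values near 0 suggest no uniform DIF; the sign shows which group the
    item favors after conditioning on ability.
    """
    resid = item - rasch_p(theta, b)
    return resid[group == 1].mean() - resid[group == 0].mean()
```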
Stefanie A. Wind; Beyza Aksu-Dunya – Applied Measurement in Education, 2024
Careless responding is a pervasive concern in research using affective surveys. Although researchers have considered various methods for identifying careless responses, few studies have examined the utility of these methods in the context of computer adaptive testing (CAT) for affective scales. Using a simulation study informed by recent…
Descriptors: Response Style (Tests), Computer Assisted Testing, Adaptive Testing, Affective Measures
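One of the simplest screens in this literature flags responses faster than a time threshold. The sketch below is a minimal version; the 2-second threshold is a placeholder, and operational studies typically set it per item.

```python
# Sketch: flagging candidate careless responses by response time.
import numpy as np

def flag_rapid(resp_times, threshold=2.0):
    """Boolean mask of responses faster than `threshold` seconds."""
    return np.asarray(resp_times) < threshold

rt = [5.2, 0.8, 6.1, 1.1, 4.4]
print("flagged:", np.flatnonzero(flag_rapid(rt)))  # -> [1 3]
```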
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2022
Two independent statistical tests of item compromise are presented, one based on the test takers' responses and the other on their response times (RTs) on the same items. The tests can be used to monitor an item in real time during online continuous testing but are also applicable as part of post hoc forensic analysis. The two test statistics are…
Descriptors: Test Items, Item Analysis, Item Response Theory, Computer Assisted Testing
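A hedged sketch in the spirit of the response-time side of such checks, using standardized residuals under the lognormal RT model; the parameter values are invented for illustration, and this is not the article's exact test statistic.

```python
# Sketch: response-time screening under the lognormal RT model,
# log T ~ Normal(beta_i - tau_j, alpha_i^-2). A run of strongly negative
# z's on one item across examinees suggests possible preknowledge.
import numpy as np

def rt_z(log_t, beta_i, alpha_i, tau):
    """Standardized log response-time residuals for one item."""
    return alpha_i * (np.asarray(log_t) - (beta_i - np.asarray(tau)))

log_t = np.log([12.0, 3.1, 2.8, 14.5])   # observed seconds on the item
tau = np.array([0.1, -0.2, 0.0, 0.3])    # examinee speed estimates
z = rt_z(log_t, beta_i=2.5, alpha_i=1.8, tau=tau)
print(np.round(z, 2))  # large negative values = suspiciously fast
```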
Aybek, Eren Can – Journal of Applied Testing Technology, 2021
The study aims to introduce catIRT tools, which facilitates researchers' Item Response Theory (IRT) and Computerized Adaptive Testing (CAT) simulations. catIRT tools provides an interface to the mirt and catR packages through the shiny package in R. Through this interface, researchers can run IRT calibration and CAT simulations although they do not…
Descriptors: Item Response Theory, Computer Assisted Testing, Simulation, Models
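For orientation, the core loop such a simulator automates looks roughly like the following: maximum-information item selection plus an EAP ability update under the Rasch model. This is a minimal stand-in written from scratch, not the catIRT tools, mirt, or catR API.

```python
# Sketch: a bare-bones fixed-length CAT simulation under the Rasch model.
import numpy as np

rng = np.random.default_rng(1)
bank = rng.normal(0, 1, 200)              # item difficulties
true_theta, theta_hat = 0.8, 0.0
quad = np.linspace(-4, 4, 81)             # quadrature points for EAP
posterior = np.exp(-quad**2 / 2)          # standard normal prior
used = []

def p(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

for _ in range(20):                       # administer 20 items
    avail = np.setdiff1d(np.arange(len(bank)), used)
    # Rasch information peaks where b is closest to the current theta.
    item = avail[np.argmin(np.abs(bank[avail] - theta_hat))]
    u = rng.random() < p(true_theta, bank[item])
    used.append(item)
    like = p(quad, bank[item]) if u else 1 - p(quad, bank[item])
    posterior *= like
    theta_hat = np.sum(quad * posterior) / np.sum(posterior)  # EAP update

print(f"true theta = {true_theta}, EAP = {theta_hat:.2f}")
```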
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is influenced not only by their ability level but also by their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
Ozge Ersan Cinar – ProQuest LLC, 2022
In educational tests, a group of questions related to a shared stimulus is called a testlet (e.g., a reading passage with multiple related questions). Testlets are very common in educational testing. Additionally, computerized adaptive testing (CAT) is a mode of testing in which test forms are assembled in real time, tailored to the test…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Educational Testing
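A minimal sketch of how testlet-induced local dependence is typically generated, adding a person-by-testlet random effect to the Rasch logit; the sizes and effect variance are illustrative.

```python
# Sketch: generating testlet data with logit = theta - b + gamma, where
# gamma is specific to each person-testlet pair and induces dependence
# among items that share a stimulus.
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_testlets, items_per = 1000, 5, 4
theta = rng.normal(0, 1, n_persons)
b = rng.normal(0, 1, (n_testlets, items_per))
gamma = rng.normal(0, 0.7, (n_persons, n_testlets))  # testlet effects

logit = theta[:, None, None] - b[None, :, :] + gamma[:, :, None]
responses = rng.random(logit.shape) < 1.0 / (1.0 + np.exp(-logit))
print(responses.shape)  # (persons, testlets, items within testlet)
```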
Kreitchmann, Rodrigo S.; Sorrel, Miguel A.; Abad, Francisco J. – Educational and Psychological Measurement, 2023
Multidimensional forced-choice (FC) questionnaires have been consistently found to reduce the effects of socially desirable responding and faking in noncognitive assessments. Although FC has been considered problematic for providing ipsative scores under the classical test theory, item response theory (IRT) models enable the estimation of…
Descriptors: Measurement Techniques, Questionnaires, Social Desirability, Adaptive Testing
Hanif Akhtar – International Society for Technology, Education, and Science, 2023
For efficiency, a computerized adaptive test (CAT) algorithm selects items with maximum information, typically items with a 50% probability of being answered correctly. However, examinees may not be satisfied if they answer only 50% of the items correctly. Researchers discovered that changing the item selection algorithms to choose easier items (i.e.,…
Descriptors: Success, Probability, Computer Assisted Testing, Adaptive Testing
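A sketch of the alternative selection rule the abstract alludes to: picking the item whose predicted success probability is closest to a target above 0.5, rather than the maximum-information item. The 0.7 target and bank values are illustrative.

```python
# Sketch: easier-item selection at a target success probability.
import numpy as np

def p(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def pick_item(theta_hat, bank_b, available, target_p=0.7):
    """Return the available item with P(correct | theta_hat) nearest target_p."""
    probs = p(theta_hat, bank_b[available])
    return available[np.argmin(np.abs(probs - target_p))]

bank_b = np.array([-1.5, -0.8, 0.0, 0.6, 1.2])
print(pick_item(theta_hat=0.0, bank_b=bank_b, available=np.arange(5)))
# -> index 1: P(correct) = 0.69, closest to the 0.7 target
```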
Carol Eckerly; Yue Jia; Paul Jewsbury – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring
Che Lah, Noor Hidayah; Tasir, Zaidatun; Jumaat, Nurul Farhana – Educational Studies, 2023
The aim of the study was to evaluate an extended version of the Problem-Solving Inventory (PSI) for online learning settings, known as the Online Problem-Solving Inventory (OPSI), through the lens of Rasch model analysis. To date, there is no extended version of the PSI for online settings even though many researchers have used it; thus, this…
Descriptors: Problem Solving, Measures (Individuals), Electronic Learning, Item Response Theory
Wyse, Adam E.; McBride, James R. – Journal of Educational Measurement, 2021
A key consideration when giving any computerized adaptive test (CAT) is how much adaptation is present when the test is used in practice. This study introduces a new framework to measure the amount of adaptation of Rasch-based CATs based on looking at the differences between the selected item locations (Rasch item difficulty parameters) of the…
Descriptors: Item Response Theory, Computer Assisted Testing, Adaptive Testing, Test Items
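One simple summary in the spirit of that framework, assuming adaptation shows up as a small gap between administered item locations and final ability estimates; the article defines its own, more complete statistics.

```python
# Sketch: mean absolute gap between selected Rasch item locations and the
# final theta estimates. A fixed linear form gives every examinee the same
# b's, leaving a large gap for extreme thetas; a well-adapted CAT shrinks it.
import numpy as np

def adaptation_gap(administered_b, theta_final):
    """administered_b : (n_persons, test_length) difficulties actually given
    theta_final    : (n_persons,) final ability estimates"""
    return np.mean(np.abs(administered_b - np.asarray(theta_final)[:, None]))
```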