Publication Date
In 2025: 1
Since 2024: 2
Since 2021 (last 5 years): 8
Since 2016 (last 10 years): 20
Since 2006 (last 20 years): 42
Descriptor
Computation: 47
Computer Assisted Testing: 47
Test Items: 47
Adaptive Testing: 34
Item Response Theory: 18
Item Banks: 10
Accuracy: 9
Comparative Analysis: 9
Foreign Countries: 9
Test Construction: 8
Bayesian Statistics: 7
Author
Chang, Hua-Hua: 3
Davey, Tim: 2
He, Wei: 2
Herbert, Erin: 2
Penfield, Randall D.: 2
Rizavi, Saba: 2
Veldkamp, Bernard P.: 2
Wang, Chun: 2
Wang, Wen-Chung: 2
Way, Walter D.: 2
Ainley, John: 1
Publication Type
Journal Articles: 40
Reports - Research: 30
Reports - Evaluative: 9
Reports - Descriptive: 7
Speeches/Meeting Papers: 2
Collected Works - General: 1
Numerical/Quantitative Data: 1
Reports - General: 1
Education Level
Higher Education: 4
Secondary Education: 4
Elementary Education: 3
Grade 8: 2
High Schools: 2
Junior High Schools: 2
Middle Schools: 2
Postsecondary Education: 2
Grade 11: 1
Grade 12: 1
Grade 4: 1
Audience
Researchers: 1
Location
United Kingdom: 3
Chile: 2
Denmark: 2
France: 2
Germany: 2
Italy: 2
Netherlands: 2
South Korea: 2
United States: 2
Australia: 1
Austria: 1
Assessments and Surveys
California Achievement Tests: 1
National Assessment of…: 1
Harold Doran; Testsuhiro Yamada; Ted Diaz; Emre Gonulates; Vanessa Culver – Journal of Educational Measurement, 2025
Computer adaptive testing (CAT) is an increasingly common mode of test administration offering improved test security, better measurement precision, and the potential for shorter testing experiences. This article presents a new item selection algorithm based on a generalized objective function to support multiple types of testing conditions and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
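The classic baseline that generalized item selection objectives like the one above extend is maximum Fisher information selection: at each step, administer the unseen item most informative at the current ability estimate. A minimal sketch under the 2PL model, with a hypothetical item pool (not the authors' algorithm or data):

```python
import math

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(theta, pool, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info_2pl(theta, *pool[i]))

# hypothetical item pool: (discrimination a, difficulty b)
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.2), (1.0, 1.5)]
print(select_item(0.0, pool, administered={2}))  # → 0
```

In an operational CAT this greedy rule is usually tempered by exposure control and content constraints, which is the kind of condition a generalized objective function can fold into a single criterion.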
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty rely frequently on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
Ersen, Rabia Karatoprak; Lee, Won-Chan – Journal of Educational Measurement, 2023
The purpose of this study was to compare calibration and linking methods for placing pretest item parameter estimates on the item pool scale in a 1-3 computerized multistage adaptive testing design in terms of item parameter recovery. Two models were used: embedded-section, in which pretest items were administered within a separate module, and…
Descriptors: Pretesting, Test Items, Computer Assisted Testing, Adaptive Testing
Yuan, Lu; Huang, Yingshi; Li, Shuhang; Chen, Ping – Journal of Educational Measurement, 2023
Online calibration is a key technology for item calibration in computerized adaptive testing (CAT) and has been widely used in various forms of CAT, including unidimensional CAT, multidimensional CAT (MCAT), CAT with polytomously scored items, and cognitive diagnostic CAT. However, as multidimensional and polytomous assessment data become more…
Descriptors: Computer Assisted Testing, Adaptive Testing, Computation, Test Items
Development of a High-Accuracy and Effective Online Calibration Method in CD-CAT Based on Gini Index
Tan, Qingrong; Cai, Yan; Luo, Fen; Tu, Dongbo – Journal of Educational and Behavioral Statistics, 2023
To improve the calibration accuracy and calibration efficiency of cognitive diagnostic computerized adaptive testing (CD-CAT) for new items and, ultimately, contribute to the widespread application of CD-CAT in practice, the current article proposed a Gini-based online calibration method that can simultaneously calibrate the Q-matrix and item…
Descriptors: Cognitive Tests, Computer Assisted Testing, Adaptive Testing, Accuracy
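The Gini index underlying the calibration method above is, in its basic form, a measure of impurity of a discrete distribution: it is maximal when probability mass is spread uniformly and zero when it is concentrated on one outcome. A minimal illustration of that statistic alone (the article's full Q-matrix calibration procedure is not reproduced here):

```python
def gini(probs):
    """Gini impurity of a discrete distribution: 1 - sum(p^2)."""
    return 1.0 - sum(p * p for p in probs)

# uniform over 4 attribute patterns -> maximal impurity
print(round(gini([0.25, 0.25, 0.25, 0.25]), 2))  # → 0.75
# mass concentrated on one pattern -> impurity near zero
print(round(gini([0.97, 0.01, 0.01, 0.01]), 4))  # → 0.0588
```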
Pearson, Christopher; Penna, Nigel – Assessment & Evaluation in Higher Education, 2023
E-assessments are becoming increasingly common and progressively more complex. Consequently, the careful design and marking of these longer, more complex questions is imperative. This article uses the NUMBAS e-assessment tool to investigate the best practice for creating longer questions and their mark schemes on surveying modules taken by engineering…
Descriptors: Automation, Scoring, Engineering Education, Foreign Countries
Chen, Chia-Wen; Wang, Wen-Chung; Chiu, Ming Ming; Ro, Sage – Journal of Educational Measurement, 2020
The use of computerized adaptive testing algorithms for ranking items (e.g., college preferences, career choices) involves two major challenges: unacceptably high computation times (selecting from a large item pool with many dimensions) and biased results (enhanced preferences or intensified examinee responses because of repeated statements across…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
Öztürk, Nagihan Boztunç – Universal Journal of Educational Research, 2019
In this study, how the length and characteristics of the routing module in different panel designs affect measurement precision is examined. The study covers six routing module lengths, nine routing module characteristics, and two panel designs. At the end of the study, the effects of these conditions on…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Length, Test Format
Choe, Edison M.; Kern, Justin L.; Chang, Hua-Hua – Journal of Educational and Behavioral Statistics, 2018
Despite common operationalization, measurement efficiency of computerized adaptive testing should not only be assessed in terms of the number of items administered but also the time it takes to complete the test. To this end, a recent study introduced a novel item selection criterion that maximizes Fisher information per unit of expected response…
Descriptors: Computer Assisted Testing, Reaction Time, Item Response Theory, Test Items
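The time-aware criterion described above replaces "most information per item" with "most information per unit of expected response time": each candidate's Fisher information at the current ability estimate is divided by its expected completion time. A minimal sketch with hypothetical timing values (the study's response-time model is not reproduced):

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_time_efficient(theta, items):
    """items: list of (a, b, expected_seconds); maximize information per second."""
    return max(range(len(items)),
               key=lambda i: info_2pl(theta, items[i][0], items[i][1]) / items[i][2])

items = [(1.5, 0.0, 90.0),   # more informative but slow
         (1.0, 0.0, 30.0)]   # less informative but fast
print(select_time_efficient(0.0, items))  # → 1
```

Note how the fast item wins even though the slow item carries more information per administration: 0.25/30 s exceeds 0.5625/90 s.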
Gawliczek, Piotr; Krykun, Viktoriia; Tarasenko, Nataliya; Tyshchenko, Maksym; Shapran, Oleksandr – Advanced Education, 2021
The article deals with an innovative, cutting-edge solution within the language testing realm, namely computer adaptive language testing (CALT) in accordance with the NATO Standardization Agreement 6001 (NATO STANAG 6001) requirements for further implementation in foreign language training of personnel of the Armed Forces of Ukraine (AF of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Language Tests, Second Language Instruction
Sahin, Alper; Ozbasi, Durmus – Eurasian Journal of Educational Research, 2017
Purpose: This study aims to reveal the effects of content balancing and item selection method on ability estimation in computerized adaptive tests by comparing Fisher's maximum information (FMI) and likelihood weighted information (LWI) methods. Research Methods: Four groups of examinees (250, 500, 750, 1000) and a bank of 500 items with 10 different…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Content
Lin, Yin; Brown, Anna – Educational and Psychological Measurement, 2017
A fundamental assumption in computerized adaptive testing is that item parameters are invariant with respect to context--items surrounding the administered item. This assumption, however, may not hold in forced-choice (FC) assessments, where explicit comparisons are made between items included in the same block. We empirically examined the…
Descriptors: Personality Measures, Measurement Techniques, Context Effect, Test Items
Fraillon, Julian, Ed.; Ainley, John, Ed.; Schulz, Wolfram, Ed.; Friedman, Tim, Ed.; Duckworth, Daniel, Ed. – International Association for the Evaluation of Educational Achievement, 2020
IEA's International Computer and Information Literacy Study (ICILS) 2018 investigated how well students are prepared for study, work, and life in a digital world. ICILS 2018 measured international differences in students' computer and information literacy (CIL): their ability to use computers to investigate, create, participate, and communicate at…
Descriptors: International Assessment, Computer Literacy, Information Literacy, Computer Assisted Testing