Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 5
Since 2006 (last 20 years): 43
Descriptor
Adaptive Testing: 160
Test Items: 160
Computer Assisted Testing: 141
Item Response Theory: 60
Simulation: 53
Test Construction: 52
Item Banks: 39
Estimation (Mathematics): 33
Selection: 32
Ability: 26
Comparative Analysis: 23
Author
Stocking, Martha L.: 14
van der Linden, Wim J.: 10
Chang, Hua-Hua: 8
Berger, Martijn P. F.: 6
Wainer, Howard: 6
De Ayala, R. J.: 5
Zwick, Rebecca: 5
Chen, Shu-Ying: 4
Davey, Tim: 4
Plake, Barbara S.: 4
Samejima, Fumiko: 4
Publication Type
Reports - Evaluative: 160
Journal Articles: 81
Speeches/Meeting Papers: 39
Information Analyses: 2
Guides - Non-Classroom: 1
Numerical/Quantitative Data: 1
Opinion Papers: 1
Reports - Research: 1
Education Level
Higher Education: 4
Elementary Education: 3
Elementary Secondary Education: 3
High Schools: 3
Secondary Education: 3
Grade 4: 2
Postsecondary Education: 2
Grade 5: 1
Grade 6: 1
Grade 8: 1
Grade 9: 1
Audience
Practitioners: 1
Lim, Hwanggyu; Choe, Edison M. – Journal of Educational Measurement, 2023
The residual differential item functioning (RDIF) detection framework was recently developed in a linear testing context. To explore the potential application of this framework to computerized adaptive testing (CAT), the present study investigated the utility of the RDIF_R statistic both as an index for detecting uniform DIF of…
Descriptors: Test Items, Computer Assisted Testing, Item Response Theory, Adaptive Testing
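The published RDIF_R statistic has its own standardization and reference distribution, which this record does not spell out; the sketch below only illustrates the underlying idea of a raw-residual DIF index, computed for one studied item on the focal group under reference-group 2PL parameters. The function names and the 2PL choice are assumptions, not the authors' implementation.

import numpy as np

def two_pl(theta, a, b):
    # 2PL probability of a correct response
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def raw_residual_index(responses, thetas, a, b):
    # Mean raw residual (observed minus model-expected) for the focal group;
    # values far from zero point toward uniform DIF on this item.
    expected = two_pl(np.asarray(thetas, dtype=float), a, b)
    return float(np.mean(np.asarray(responses, dtype=float) - expected))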
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
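For readers unfamiliar with the term, a simple-structure (between-item) MIRT model constrains each item to measure exactly one dimension. One common two-parameter logistic parameterization, assumed here for illustration, is

P(X_{ij} = 1 \mid \boldsymbol{\theta}_i) = \frac{1}{1 + \exp\{-a_j[\theta_{i,d(j)} - b_j]\}}

where d(j) indexes the single dimension measured by item j. Calibrating a separate UIRT model per dimension amounts to fitting this expression dimension by dimension, which is the practice whose validity the article examines.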
Cui, Zhongmin; Liu, Chunyan; He, Yong; Chen, Hanwei – Journal of Educational Measurement, 2018
Allowing item review in computerized adaptive testing (CAT) is getting more attention in the educational measurement field as more and more testing programs adopt CAT. The research literature has shown that allowing item review in an educational test could result in more accurate estimates of examinees' abilities. The practice of item review in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Wiseness
Cappaert, Kevin J.; Wen, Yao; Chang, Yu-Feng – Measurement: Interdisciplinary Research and Perspectives, 2018
Events such as curriculum changes or practice effects can lead to item parameter drift (IPD) in computer adaptive testing (CAT). The current investigation introduced a point- and weight-adjusted D^2 method for IPD detection for use in a CAT environment when items are suspected of drifting across test administrations. Type I error and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Identification
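The point- and weight-adjusted D^2 statistic itself is not reproduced in this snippet. As a rough, assumed illustration of the general approach, drift for a recalibrated item can be quantified as a weighted squared discrepancy between the item characteristic curves implied by the original and re-estimated parameters over a grid of theta values; all names below are hypothetical and the 2PL form is an assumption.

import numpy as np

def p2pl(theta, a, b):
    # 2PL item characteristic curve
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def icc_discrepancy(a_old, b_old, a_new, b_new, grid=None, weights=None):
    # Weighted squared difference between old and new ICCs; large values
    # suggest the item's parameters have drifted between administrations.
    grid = np.linspace(-4.0, 4.0, 81) if grid is None else np.asarray(grid)
    weights = np.full(grid.shape, 1.0 / grid.size) if weights is None else np.asarray(weights)
    diff = p2pl(grid, a_new, b_new) - p2pl(grid, a_old, b_old)
    return float(np.sum(weights * diff**2))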
Yamamoto, Kentaro; Shin, Hyo Jeong; Khorramdel, Lale – OECD Publishing, 2019
This paper describes and evaluates a multistage adaptive testing (MSAT) design that was implemented for the Programme for International Student Assessment (PISA) 2018 main survey for the major domain of Reading. Through a simulation study, recovery of item response theory model parameters and measurement precision were examined. The PISA 2018 MSAT…
Descriptors: Adaptive Testing, Test Construction, Achievement Tests, Foreign Countries
Arendasy, Martin E.; Sommer, Markus – Learning and Individual Differences, 2012
The use of new test administration technologies such as computerized adaptive testing in high-stakes educational and occupational assessments demands large item pools. Classic item construction processes and previous approaches to automatic item generation faced the problem of a considerable loss of items after the item calibration phase. In this…
Descriptors: Item Banks, Test Items, Adaptive Testing, Psychometrics
Yao, Lihua – Applied Psychological Measurement, 2013
Through simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared using different stopping rules. Fixed item exposure rates are used for all the items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
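The abstract is cut off before naming both stopping rules; the first, a fixed-precision (standard error) rule, can be sketched as below. The cutoff and maximum length are illustrative values only, not those used in the study.

import math

def standard_error(test_information):
    # SE of the ability estimate from accumulated test information
    return 1.0 / math.sqrt(test_information)

def should_stop(test_information, items_given, se_cutoff=0.3, max_items=40):
    # Stop when the estimate is precise enough or the length cap is reached
    return standard_error(test_information) <= se_cutoff or items_given >= max_items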
Lin, Chuan-Ju – Educational and Psychological Measurement, 2011
This study compares four item selection criteria for two-category computerized classification testing: (1) Fisher information (FI), (2) Kullback-Leibler information (KLI), (3) weighted log-odds ratio (WLOR), and (4) mutual information (MI), with respect to the efficiency and accuracy of classification decisions using the sequential probability…
Descriptors: Computer Assisted Testing, Adaptive Testing, Selection, Test Items
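Of the four criteria, only the maximum Fisher information rule is sketched here, using the standard 3PL item information formula; the KLI, WLOR, and MI criteria and the sequential probability ratio test itself are not shown, and the pool layout is an assumption.

import numpy as np

def p3pl(theta, a, b, c):
    # 3PL probability of a correct response
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    # 3PL item information: a^2 * (Q/P) * ((P - c)/(1 - c))^2
    p = p3pl(theta, a, b, c)
    return a**2 * ((1.0 - p) / p) * ((p - c) / (1.0 - c))**2

def select_max_info(theta_hat, pool, administered):
    # pool: list of (a, b, c) tuples; return the unused item with maximal FI
    unused = [j for j in range(len(pool)) if j not in administered]
    return max(unused, key=lambda j: info_3pl(theta_hat, *pool[j]))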
Khunkrai, Naruemon; Sawangboon, Tatsirin; Ketchatturat, Jatuphum – Educational Research and Reviews, 2015
The aim of this research is to compare the test information and evaluation results produced by a multidimensional computerized adaptive scholastic aptitude test program administered to Grade 9 students under different test-review conditions. Grade 9 students of the Secondary Educational Service Area Office in the North-east of…
Descriptors: Foreign Countries, Secondary School Students, Grade 9, Computer Assisted Testing
Chen, Shu-Ying – Applied Psychological Measurement, 2010
To date, exposure control procedures that are designed to control test overlap in computerized adaptive tests (CATs) are based on the assumption of item sharing between pairs of examinees. However, in practice, examinees may obtain test information from more than one previous test taker. This larger scope of information sharing needs to be…
Descriptors: Computer Assisted Testing, Adaptive Testing, Methods, Test Items
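As context for what is being controlled, a pairwise test overlap rate is simply the proportion of items two examinees' tests share; the study's point is that information sharing can involve more than a pair, which this basic sketch does not capture. The function name and the fixed-length assumption are illustrative.

import itertools

def mean_pairwise_overlap(tests):
    # tests: list of item-ID lists, one per examinee, all of the same length
    length = len(tests[0])
    rates = [len(set(t1) & set(t2)) / length
             for t1, t2 in itertools.combinations(tests, 2)]
    return sum(rates) / len(rates)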
Becker, Kirk A.; Bergstrom, Betty A. – Practical Assessment, Research & Evaluation, 2013
The need for increased exam security, improved test formats, more flexible scheduling, better measurement, and more efficient administrative processes has caused testing agencies to consider converting the administration of their exams from paper-and-pencil to computer-based testing (CBT). Many decisions must be made in order to provide an optimal…
Descriptors: Testing, Models, Testing Programs, Program Administration
Deng, Hui; Ansley, Timothy; Chang, Hua-Hua – Journal of Educational Measurement, 2010
In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…
Descriptors: Computer Assisted Testing, Adaptive Testing, Selection, Methods
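A minimal sketch of the basic a-stratified (STR) idea follows, assuming items are dictionaries with "a" and "b" parameters: the pool is split into strata of ascending discrimination, and within the current stratum the unused item whose difficulty is closest to the interim ability estimate is chosen. The refined USTR allocation of more selections to high-a strata is not shown.

import numpy as np

def build_strata(pool, n_strata):
    # Partition item indices into strata of ascending discrimination (a)
    order = np.argsort([item["a"] for item in pool])
    return np.array_split(order, n_strata)

def select_from_stratum(stratum, pool, theta_hat, administered):
    # Pick the unused item in this stratum with difficulty (b) nearest theta_hat
    candidates = [j for j in stratum if j not in administered]
    return min(candidates, key=lambda j: abs(pool[j]["b"] - theta_hat))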
Green, Bert F. – Applied Psychological Measurement, 2011
This article refutes a recent claim that computer-based tests produce biased scores for very proficient test takers who make mistakes on one or two initial items and that the "bias" can be reduced by using a four-parameter IRT model. Because the same effect occurs with pattern scores on nonadaptive tests, the effect results from IRT scoring, not…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Bias, Item Response Theory
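For reference, the four-parameter logistic (4PL) model invoked in the disputed claim is usually written as

P_j(\theta) = c_j + \frac{d_j - c_j}{1 + \exp[-a_j(\theta - b_j)]}

where c_j is a lower asymptote for guessing and d_j < 1 is an upper asymptote that limits the score penalty for a careless error; fixing d_j = 1 recovers the 3PL model.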
Cheng, Ying – Educational and Psychological Measurement, 2010
This article proposes a new item selection method, namely, the modified maximum global discrimination index (MMGDI) method, for cognitive diagnostic computerized adaptive testing (CD-CAT). The new method captures two aspects of the appeal of an item: (a) the amount of contribution it can make toward adequate coverage of every attribute and (b) the…
Descriptors: Cognitive Tests, Diagnostic Tests, Computer Assisted Testing, Adaptive Testing
Yen, Yung-Chin; Ho, Rong-Guey; Liao, Wen-Wei; Chen, Li-Ju; Kuo, Ching-Chin – Applied Psychological Measurement, 2012
In a selected response test, aberrant responses such as careless errors and lucky guesses might cause error in ability estimation because these responses do not actually reflect the knowledge that examinees possess. In a computerized adaptive test (CAT), these aberrant responses could further cause serious estimation error due to dynamic item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Response Style (Tests)