Sahin, Alper; Ozbasi, Durmus – Eurasian Journal of Educational Research, 2017
Purpose: This study aims to reveal the effects of content balancing and item selection method on ability estimation in computerized adaptive tests by comparing Fisher's maximum information (FMI) and likelihood weighted information (LWI) methods. Research Methods: Four groups of examinees (250, 500, 750, 1000) and a bank of 500 items with 10 different…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Content
He, Wei; Diao, Qi; Hauser, Carl – Educational and Psychological Measurement, 2014
This study compared four item-selection procedures developed for use with severely constrained computerized adaptive tests (CATs). Severely constrained CATs refer to those adaptive tests that seek to meet a complex set of constraints that are often not mutually exclusive (i.e., an item may contribute to the satisfaction of several…
Descriptors: Comparative Analysis, Test Items, Selection, Computer Assisted Testing
Cheng, Ying; Patton, Jeffrey M.; Shao, Can – Educational and Psychological Measurement, 2015
a-Stratified computerized adaptive testing with b-blocking (AST), as an alternative to the widely used maximum Fisher information (MFI) item selection method, can effectively balance item pool usage while providing accurate latent trait estimates in computerized adaptive testing (CAT). However, previous comparisons of these methods have treated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
Yao, Lihua – Journal of Educational Measurement, 2014
The intent of this research was to find an item selection procedure in the multidimensional computer adaptive testing (CAT) framework that yielded higher precision for both the domain and composite abilities, had a higher usage of the item pool, and controlled the exposure rate. Five multidimensional CAT item selection procedures (minimum angle;…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
He, Wei; Diao, Qi; Hauser, Carl – Online Submission, 2013
This study compares four existing procedures for handling item selection in severely constrained computerized adaptive tests (CATs). These procedures include the weighted deviation model (WDM), weighted penalty model (WPM), maximum priority index (MPI), and shadow test approach (STA). Severely constrained CATs refer to those adaptive tests seeking…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G. – Educational and Psychological Measurement, 2013
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Statistical Analysis
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2015
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm…
Descriptors: Computer Assisted Testing, Adaptive Testing, Accuracy, Fidelity
Ho, Tsung-Han; Dodd, Barbara G. – Applied Measurement in Education, 2012
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Deng, Hui; Ansley, Timothy; Chang, Hua-Hua – Journal of Educational Measurement, 2010
In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…
Descriptors: Computer Assisted Testing, Adaptive Testing, Selection, Methods
Veldkamp, Bernard P. – Psicologica: International Journal of Methodology and Experimental Psychology, 2010
Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…
Descriptors: Selection, Criteria, Bayesian Statistics, Computer Assisted Testing
Barrada, Juan Ramon; Olea, Julio; Ponsoda, Vicente; Abad, Francisco Jose – Applied Psychological Measurement, 2010
In a typical study comparing the relative efficiency of two item selection rules in computerized adaptive testing, the common result is that they simultaneously differ in accuracy and security, making it difficult to reach a conclusion on which is the more appropriate rule. This study proposes a strategy to conduct a global comparison of two or…
Descriptors: Test Items, Simulation, Adaptive Testing, Item Analysis
Cheng, Ying; Chang, Hua-Hua; Douglas, Jeffrey; Guo, Fanmin – Educational and Psychological Measurement, 2009
a-stratification is a method that utilizes items with small discrimination (a) parameters early in an exam and those with higher a values when more is learned about the ability parameter. It can achieve much better item usage than the maximum information criterion (MIC). To make a-stratification more practical and more widely applicable, a method…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
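The a-stratification idea summarized in the abstract above (low-discrimination items early, high-discrimination items once more is known about ability) can be sketched as a simple selection rule. This is a minimal illustration, not any of the cited authors' implementations: the function name, the equal-size strata, and the b-matching rule within a stratum are assumptions for the sketch.

```python
import numpy as np

def a_stratified_select(items, theta_hat, stage, n_strata=4, administered=()):
    """Pick the next CAT item under a-stratification.

    Early stages (low `stage`) draw from strata of items with small
    discrimination (a); later stages draw from high-a strata. Within the
    active stratum, choose the unused item whose difficulty b is closest
    to the current ability estimate theta_hat (b-matching).

    `items` is an (n, 2) array of (a, b) parameter pairs.
    """
    order = np.argsort(items[:, 0])            # item indices sorted by a
    strata = np.array_split(order, n_strata)   # low-a strata first
    pool = [i for i in strata[stage] if i not in administered]
    # b-matching: minimize |b_i - theta_hat| inside the active stratum
    return min(pool, key=lambda i: abs(items[i, 1] - theta_hat))
```

With two strata over a four-item bank, stage 0 selects only among the two low-a items, matching on difficulty; stage 1 moves to the high-a stratum.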

Pastor, Dena A.; Dodd, Barbara G.; Chang, Hua-Hua – Applied Psychological Measurement, 2002
Studied the impact of using five different exposure control algorithms in two sizes of item pool calibrated using the generalized partial credit model. Simulation results show that the a-stratified design, in comparison to a no-exposure control condition, could be used to reduce item exposure and overlap and increase pool use, while degrading…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Item Banks

Revuelta, Javier; Ponsoda, Vicente – Journal of Educational Measurement, 1998
Proposes two new methods for item-exposure control, the Progressive method and the Restricted Maximum Information method. Compares both methods with six other item-selection methods. Discusses advantages of the two new methods and the usefulness of combining them. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Selection

Schnipke, Deborah L.; Green, Bert F. – Journal of Educational Measurement, 1995
Two item selection algorithms, one based on maximal differentiation between examinees and one based on item response theory and maximum information for each examinee, were compared in simulated linear and adaptive tests of cognitive ability. Adaptive tests based on maximum information were clearly superior. (SLD)
Descriptors: Adaptive Testing, Algorithms, Comparative Analysis, Item Response Theory
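Several of the entries above compare alternatives against maximum-information item selection, so it is worth seeing how small that baseline rule is. The sketch below assumes a 2PL item response model (the studies above variously use 2PL, 3PL, and polytomous models); function names are illustrative.

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P * (1 - P), where P is the 2PL response probability."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def max_info_select(theta_hat, a, b, administered=()):
    """Maximum-information rule: administer the not-yet-used item with the
    largest Fisher information at the current ability estimate."""
    info = fisher_info_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf   # mask items already given
    return int(np.argmax(info))
```

Information peaks where difficulty matches ability (for 2PL, I(b) = a²/4), which is why this rule drains the high-a items first and motivates the exposure-control and stratification methods studied above.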