Showing all 14 results
Peer reviewed
Gorney, Kylie; Reckase, Mark D. – Journal of Educational Measurement, 2025
In computerized adaptive testing, item exposure control methods are often used to provide a more balanced usage of the item pool. Many of the most popular methods, including the restricted method (Revuelta and Ponsoda), use a single maximum exposure rate to limit the proportion of times that each item is administered. However, Barrada et al.…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
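As a rough illustration of the single-maximum-rate idea this entry discusses (not the restricted method's exact bookkeeping), the sketch below treats an item as eligible only while its observed exposure rate stays under r_max; all names and values are illustrative:

    import numpy as np

    def eligible_items(exposure_counts, tests_administered, r_max=0.2, administered=()):
        """Return indices of items whose observed exposure rate is below r_max.

        exposure_counts[i] counts how often item i has been given so far;
        tests_administered is the number of CATs delivered to date.
        Items already seen by the current examinee are excluded.
        """
        rates = exposure_counts / max(tests_administered, 1)
        mask = rates < r_max
        mask[list(administered)] = False
        return np.flatnonzero(mask)

    # Illustrative use: pick the most informative item among eligible ones.
    rng = np.random.default_rng(0)
    exposure_counts = rng.integers(0, 50, size=100).astype(float)
    info = rng.random(100)   # stand-in for Fisher information at theta-hat
    ok = eligible_items(exposure_counts, tests_administered=200)
    next_item = ok[np.argmax(info[ok])]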
Peer reviewed
Morris, Scott B.; Bass, Michael; Howard, Elizabeth; Neapolitan, Richard E. – International Journal of Testing, 2020
The standard error (SE) stopping rule, which terminates a computer adaptive test (CAT) when the "SE" is less than a threshold, is effective when there are informative questions for all trait levels. However, in domains such as patient-reported outcomes, the items in a bank might all target one end of the trait continuum (e.g., negative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Item Response Theory
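The SE stopping rule itself is simple to state; a minimal sketch, assuming a 2PL model where SE(theta) is the inverse square root of the accumulated Fisher information, with an illustrative threshold:

    import numpy as np

    def item_information(theta, a, b):
        """Fisher information of a 2PL item at ability theta."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1.0 - p)

    def should_stop(theta_hat, a_params, b_params, administered,
                    se_threshold=0.3, max_items=30):
        """SE stopping rule: stop once SE(theta) falls below the threshold,
        or when the maximum test length is reached."""
        info = sum(item_information(theta_hat, a_params[j], b_params[j])
                   for j in administered)
        se = np.inf if info == 0 else 1.0 / np.sqrt(info)
        return se < se_threshold or len(administered) >= max_items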
Peer reviewed
Gönülates, Emre – Educational and Psychological Measurement, 2019
This article introduces the Quality of Item Pool (QIP) Index, a novel approach to quantifying the adequacy of an item pool of a computerized adaptive test for a given set of test specifications and examinee population. This index ranges from 0 to 1, with values close to 1 indicating the item pool presents optimum items to examinees throughout the…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Error of Measurement
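The article's exact formula is not reproduced in this snippet; purely as an illustration of an index that ranges from 0 to 1 and approaches 1 when the pool delivers near-optimal items, one can picture a ratio of delivered to optimal information:

    import numpy as np

    def qip_like_ratio(delivered_info, optimal_info):
        """Illustrative ratio in [0, 1]: information actually delivered to
        examinees divided by the information ideal items would have delivered.
        This mimics the range and reading of the QIP Index, not its formula."""
        return float(np.sum(delivered_info) / np.sum(optimal_info))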
Peer reviewed
Choi, Seung W.; Grady, Matthew W.; Dodd, Barbara G. – Educational and Psychological Measurement, 2011
The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
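A hedged sketch of a predicted-SE-reduction style rule, assuming a normal posterior whose precision grows by the Fisher information of each added item; the tolerance and names are illustrative, not the paper's:

    import numpy as np

    def pser_stop(post_var, remaining_info, tol=0.01):
        """Stop when administering the most informative remaining item would
        reduce the posterior SE by less than tol.

        post_var        -- current posterior variance of theta
        remaining_info  -- Fisher information of each unadministered item
                           evaluated at the current theta estimate
        """
        if remaining_info.size == 0:
            return True
        best = remaining_info.max()
        # Normal-approximation update: precision adds item information.
        new_var = 1.0 / (1.0 / post_var + best)
        predicted_reduction = np.sqrt(post_var) - np.sqrt(new_var)
        return predicted_reduction < tol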
Peer reviewed
Glas, Cees A. W.; Geerlings, Hanneke – Studies in Educational Evaluation, 2009
Pupil monitoring systems support the teacher in tailoring teaching to the individual level of a student and in comparing the progress and results of teaching with national standards. The systems are based on the availability of an item bank calibrated using item response theory. The assessment of the students' progress and results can be further…
Descriptors: Item Banks, Adaptive Testing, National Standards, Psychometrics
Georgiadou, Elissavet; Triantafillou, Evangelos; Economides, Anastasios A. – Journal of Technology, Learning, and Assessment, 2007
Since researchers acknowledged the many advantages of computerized adaptive testing (CAT) over traditional linear test administration, the issue of item exposure control has received increased attention. Due to CAT's underlying philosophy, particular items in the item pool may be presented too often and become overexposed, while other items are…
Descriptors: Adaptive Testing, Computer Assisted Testing, Scoring, Test Items
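One widely used strategy of the kind this review surveys is randomesque selection, which samples from the k most informative items instead of always administering the single best; a minimal sketch:

    import numpy as np

    def randomesque_pick(info, administered, k=5, rng=None):
        """Randomesque exposure control: sample uniformly from the k most
        informative unadministered items rather than always taking the top one."""
        rng = rng or np.random.default_rng()
        ranked = np.argsort(info)[::-1]
        candidates = [j for j in ranked if j not in administered][:k]
        return int(rng.choice(candidates))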
Peer reviewed
Koong, Chorng-Shiuh; Wu, Chi-Ying – Computers & Education, 2010
The theory of multiple intelligences, in both its hypothesis and its implementation, has ascended to a prominent status among the many instructional methodologies. Meanwhile, pedagogical theories and concepts need more alternative and interactive assessments to demonstrate their prevalence (Kinugasa, Yamashita, Hayashi, Tominaga, & Yamasaki, 2005). In general,…
Descriptors: Multiple Intelligences, Test Items, Grading, Programming
Bergstrom, Betty A.; Stahl, John A. – 1992
This paper reports a method for assessing the adequacy of existing item banks for computer adaptive testing. The method takes into account content specifications, test length, and stopping rules, and can be used to determine if an existing item bank is adequate to administer a computer adaptive test efficiently across differing levels of examinee…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
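The paper's own procedure accounts for content specifications and stopping rules; as a stripped-down sketch of the core question (can the bank reach a target SE at a given ability within the allowed test length?), assuming a 2PL pool and idealized selection:

    import numpy as np

    def items_needed(a, b, theta, se_target=0.3, max_items=30):
        """Idealized adequacy check: selecting the most informative unused item
        at a fixed ability theta, how many items until SE drops below target?
        Returns None if the pool cannot get there within max_items.
        (Content constraints and ability re-estimation are omitted for brevity.)"""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        info_per_item = a**2 * p * (1.0 - p)
        order = np.argsort(info_per_item)[::-1]   # best items first
        total = 0.0
        for n, j in enumerate(order[:max_items], start=1):
            total += info_per_item[j]
            if 1.0 / np.sqrt(total) < se_target:
                return n
        return None

    # Sweep ability levels to see where the bank is thin.
    rng = np.random.default_rng(1)
    a = rng.uniform(0.8, 2.0, 300)
    b = rng.normal(0.0, 1.0, 300)
    coverage = {float(t): items_needed(a, b, t) for t in np.linspace(-3, 3, 7)}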
Peer reviewed
Gorin, Joanna; Dodd, Barbara; Fitzpatrick, Steven; Shieh, Yann – Applied Psychological Measurement, 2005
The primary purpose of this research is to examine the impact of estimation methods, actual latent trait distributions, and item pool characteristics on the performance of a simulated computerized adaptive testing (CAT) system. In this study, three estimation procedures are compared for accuracy of estimation: maximum likelihood estimation (MLE),…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computation, Test Items
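As an illustration of one of the estimators compared, here is a standard quadrature-based EAP sketch under a 2PL model with a standard-normal prior (MLE and the study's other procedures would differ):

    import numpy as np

    def eap_estimate(u, a, b, n_quad=61):
        """Expected a posteriori (EAP) ability estimate under a 2PL model,
        using numerical quadrature over a standard-normal prior.

        u -- 0/1 response vector; a, b -- item discriminations and difficulties.
        """
        theta = np.linspace(-4.0, 4.0, n_quad)
        prior = np.exp(-0.5 * theta**2)                      # unnormalized N(0,1)
        p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))  # n_quad x n_items
        like = np.prod(np.where(u == 1, p, 1.0 - p), axis=1)
        post = like * prior
        return float(np.sum(theta * post) / np.sum(post))

    # Example: three items answered 1, 0, 1.
    print(eap_estimate(np.array([1, 0, 1]),
                       a=np.array([1.2, 0.8, 1.5]),
                       b=np.array([-0.5, 0.0, 0.5])))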
Peer reviewed
Revuelta, Javier – Journal of Educational and Behavioral Statistics, 2004
This article presents a psychometric model for estimating ability and item-selection strategies in self-adapted testing. In contrast to computer adaptive testing, in self-adapted testing the examinees are allowed to select the difficulty of the items. The item-selection strategy is defined as the distribution of difficulty conditional on the…
Descriptors: Psychometrics, Adaptive Testing, Test Items, Evaluation Methods
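A loose sketch of the self-adapted mechanism itself, in which the examinee chooses a difficulty band and the item is drawn from it; the paper's psychometric model for estimating the selection strategy is more involved:

    import numpy as np

    def draw_self_adapted_item(b, chosen_band, bands=(-3, -1, 0, 1, 3), rng=None):
        """Self-adapted testing: the examinee chooses difficulty band k, and an
        item is drawn at random from items whose difficulty b falls in that band."""
        rng = rng or np.random.default_rng()
        lo, hi = bands[chosen_band], bands[chosen_band + 1]
        pool = np.flatnonzero((b >= lo) & (b < hi))
        return int(rng.choice(pool))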
Peer reviewed
Li, Yuan H.; Schafer, William D. – Applied Psychological Measurement, 2005
Under a multidimensional item response theory (MIRT) computerized adaptive testing (CAT) testing scenario, a trait estimate (theta) in one dimension will provide clues for subsequently seeking a solution in other dimensions. This feature may enhance the efficiency of MIRT CAT's item selection and its scoring algorithms compared with its…
Descriptors: Adaptive Testing, Item Banks, Computation, Psychological Studies
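One common way a MIRT CAT exploits such cross-dimension clues is D-optimal item selection, maximizing the determinant of the accumulated information matrix; a sketch under a multidimensional 2PL model, not necessarily the authors' algorithm:

    import numpy as np

    def m2pl_item_info(theta, a, d):
        """Fisher information matrix of a multidimensional 2PL item at theta."""
        p = 1.0 / (1.0 + np.exp(-(a @ theta + d)))
        return p * (1.0 - p) * np.outer(a, a)

    def d_optimal_next(theta_hat, A, d, administered):
        """Pick the unadministered item maximizing the determinant of the
        accumulated information matrix, so information gained on one dimension
        sharpens selection on the others."""
        acc = sum(m2pl_item_info(theta_hat, A[j], d[j]) for j in administered)
        acc = acc + 1e-6 * np.eye(len(theta_hat))  # keep det well-defined early
        best, best_det = None, -np.inf
        for j in range(len(d)):
            if j in administered:
                continue
            det = np.linalg.det(acc + m2pl_item_info(theta_hat, A[j], d[j]))
            if det > best_det:
                best, best_det = j, det
        return best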
Peer reviewed
van der Linden, Wim J. – Applied Psychological Measurement, 2006
Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Format, Equated Scores
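The TCF baseline mentioned here has a compact form: the equated observed score on the linear test is its test characteristic function evaluated at the CAT ability estimate. A sketch assuming a 2PL linear test:

    import numpy as np

    def tcf_equated_score(theta_hat, a_lin, b_lin):
        """Test characteristic function (TCF) equating: the equated observed
        score on the linear test is the sum of its item response probabilities
        evaluated at the CAT ability estimate."""
        p = 1.0 / (1.0 + np.exp(-a_lin * (theta_hat - b_lin)))
        return float(np.sum(p))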
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – Educational Testing Service, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Adaptive Testing, Test Items, Computation, Context Effect
Rizavi, Saba; Way, Walter D.; Lu, Ying; Pitoniak, Mary; Steffen, Manfred – Online Submission, 2004
The purpose of this study was to use realistically simulated data to evaluate various CAT designs for use with the verbal reasoning measure of the Medical College Admissions Test (MCAT). Factors such as item pool depth, content constraints, and item formats often cause repeated adaptive administrations of an item at ability levels that are not…
Descriptors: Test Items, Test Bias, Item Banks, College Admission