Showing 1 to 15 of 25 results
Peer reviewed
Beyza Aksu Dunya; Stefanie Wind – International Journal of Testing, 2025
We explored the practicality of relatively small item pools in the context of low-stakes Computer-Adaptive Testing (CAT), such as CAT procedures that might be used for quick diagnostic or screening exams. We used a basic CAT algorithm without content balancing and exposure control restrictions to reflect low stakes testing scenarios. We examined…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Achievement
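The "basic CAT algorithm without content balancing and exposure control" described above can be illustrated with a minimal sketch. This is a generic illustration, not the authors' implementation: it assumes Rasch-model items, maximum-information item selection, and a bounded grid-search ability estimate.

```python
import math
import random

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta: p * (1 - p)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def mle_theta(responses):
    """Grid-search maximum-likelihood ability estimate from
    (difficulty, score) pairs; the bounded grid keeps the estimate
    finite even for all-correct or all-incorrect patterns."""
    grid = [i / 10.0 - 4.0 for i in range(81)]  # -4.0 .. 4.0
    def loglik(theta):
        ll = 0.0
        for b, u in responses:
            p = rasch_prob(theta, b)
            ll += math.log(p if u else 1.0 - p)
        return ll
    return max(grid, key=loglik)

def run_cat(bank, true_theta, test_length=10, seed=0):
    """Minimal fixed-length CAT: maximum-information selection with no
    content balancing or exposure control, as in a low-stakes screener."""
    rng = random.Random(seed)
    theta, administered, responses = 0.0, set(), []
    for _ in range(test_length):
        # choose the unused item most informative at the current estimate
        item = max((i for i in range(len(bank)) if i not in administered),
                   key=lambda i: item_information(theta, bank[i]))
        administered.add(item)
        u = 1 if rng.random() < rasch_prob(true_theta, bank[item]) else 0
        responses.append((bank[item], u))
        theta = mle_theta(responses)
    return theta, sorted(administered)
```

With a small pool, the selection step quickly exhausts the items near the examinee's ability, which is exactly the small-pool practicality question the study examines.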
Peer reviewed
Kezer, Fatih – International Journal of Assessment Tools in Education, 2021
Item response theory provides various important advantages for exams carried out or to be carried out digitally. For computerized adaptive tests to be able to make valid and reliable predictions supported by IRT, good quality item pools should be used. This study examines how adaptive test applications vary in item pools which consist of items…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Li, Sylvia; Meyer, Patrick – NWEA, 2019
This simulation study examines the measurement precision, item exposure rates, and the depth of the MAP® Growth™ item pools under various grade-level restrictions. Unlike most summative assessments, MAP Growth allows examinees to see items from any grade level, regardless of the examinee's actual grade level. It does not limit the test to items…
Descriptors: Achievement Tests, Item Banks, Test Items, Instructional Program Divisions
Peer reviewed
Sahin, Alper; Weiss, David J. – Educational Sciences: Theory and Practice, 2015
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Sample Size, Item Banks
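The three-parameter logistic (3PL) model used to pre-calibrate the simulated 500-item bank has a standard closed form; a short sketch of the response function (with the conventional 1.7 scaling constant, an assumption not stated in the abstract):

```python
import math

D = 1.7  # logistic scaling constant commonly used with the normal metric

def p_3pl(theta, a, b, c):
    """Three-parameter logistic model: discrimination a, difficulty b,
    and lower asymptote ("guessing") c."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))
```

Calibration sample size matters because each of a, b, and c must be estimated from response data; small samples yield noisy parameters, which then propagate into the CAT's ability estimates.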
Peer reviewed
Wang, Chun; Chang, Hua-Hua; Boughton, Keith A. – Applied Psychological Measurement, 2013
Multidimensional computerized adaptive testing (MCAT) is able to provide a vector of ability estimates for each examinee, which could be used to provide a more informative profile of an examinee's performance. The current literature on MCAT focuses on the fixed-length tests, which can generate less accurate results for those examinees whose…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Length, Item Banks
Peer reviewed
Jiao, Hong; Macready, George; Liu, Junhui; Cho, Youngmi – Applied Psychological Measurement, 2012
This study explored a computerized adaptive test delivery algorithm for latent class identification based on the mixture Rasch model. Four item selection methods based on the Kullback-Leibler (KL) information were proposed and compared with the reversed and the adaptive KL information under simulated testing conditions. When item separation was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Identification
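The Kullback-Leibler (KL) item selection idea referenced above can be sketched for the simplest case of a dichotomous item and two candidate latent classes. This is a generic illustration of KL-based selection, not the four specific methods proposed in the study:

```python
import math

def kl_info(p1, p2):
    """KL divergence between an item's Bernoulli response distributions
    at two latent points; larger values mean the item separates the
    two classes (or ability levels) more sharply."""
    return (p1 * math.log(p1 / p2)
            + (1.0 - p1) * math.log((1.0 - p1) / (1.0 - p2)))

def select_by_kl(item_probs):
    """Pick the item whose response behavior differs most between the two
    candidate latent classes; item_probs maps item id -> (p1, p2)."""
    return max(item_probs, key=lambda i: kl_info(*item_probs[i]))
```

Items with nearly equal response probabilities in both classes carry no class-identification information (KL near zero), which is why item separation drives classification accuracy.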
Peer reviewed
Lee, HwaYoung; Dodd, Barbara G. – Educational and Psychological Measurement, 2012
This study investigated item exposure control procedures under various combinations of item pool characteristics and ability distributions in computerized adaptive testing based on the partial credit model. Three variables were manipulated: item pool characteristics (120 items for each of easy, medium, and hard item pools), two ability…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Ability
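One widely used family of exposure control procedures of the kind compared in such studies is the randomesque method (Kingsbury and Zara), sketched here for dichotomous information values; the study itself works with the partial credit model, so this is only a generic illustration:

```python
import random

def randomesque_select(infos, administered, k=5, seed=None):
    """Randomesque exposure control: draw one item at random from the k
    most informative unadministered candidates instead of always taking
    the single best, spreading exposure across the pool."""
    rng = random.Random(seed)
    candidates = [i for i in range(len(infos)) if i not in administered]
    top_k = sorted(candidates, key=lambda i: infos[i], reverse=True)[:k]
    return rng.choice(top_k)
```

The trade-off manipulated in exposure-control research is visible here: larger k lowers the exposure rate of the best items but admits slightly less informative ones, costing some measurement precision.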
Peer reviewed
He, Wei; Reckase, Mark D. – Educational and Psychological Measurement, 2014
For computerized adaptive tests (CATs) to work well, they must have an item pool with sufficient numbers of good quality items. Many researchers have pointed out that, in developing item pools for CATs, not only is the item pool size important but also the distribution of item parameters and practical considerations such as content distribution…
Descriptors: Item Banks, Test Length, Computer Assisted Testing, Adaptive Testing
Ackerman, Terry A.; Davey, Tim C. – 1991
An adaptive test can usually match or exceed the measurement precision of conventional tests several times its length. This increased efficiency is not without costs, however, as the models underlying adaptive testing make strong assumptions about examinees and items. Most troublesome is the assumption that item pools are unidimensional. Truly…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Equations (Mathematics)
Zhu, Daming; Fan, Meichu – 1999
The convention for selecting starting points (that is, initial items) on a computerized adaptive test (CAT) is to choose as starting points items of medium difficulty for all examinees. Selecting a starting point based on prior information about an individual's ability was first suggested many years ago, but has been believed unimportant provided…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
Parshall, Cynthia G.; Kromrey, Jeffrey D.; Harmes, J. Christine; Sentovich, Christina – 2001
Computerized adaptive tests (CATs) are efficient because of their optimal item selection procedures that target maximally informative items at each estimated ability level. However, operational administration of these optimal CATs results in a relatively small subset of items given to examinees too often, while another portion of the item pool is…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Patsula, Liane N.; Pashley, Peter J. – 1997
Many large-scale testing programs routinely pretest new items alongside operational (or scored) items to determine their empirical characteristics. If these pretest items pass certain statistical criteria, they are placed into an operational item pool; otherwise they are edited and re-pretested or simply discarded. In these situations, reliable…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Banks
Peer reviewed
Wang, Shudong; Wang, Tianyou – Applied Psychological Measurement, 2001
Evaluated the relative accuracy of the weighted likelihood estimate (WLE) of T. Warm (1989) compared to the maximum likelihood estimate (MLE), expected a posteriori estimate, and maximum a posteriori estimate. Results of the Monte Carlo study, which show the relative advantages of each approach, suggest that the test termination rule has more…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
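A key contrast among the estimators compared above (MLE versus the Bayesian EAP/MAP family) is behavior on extreme response patterns. A minimal EAP sketch under the Rasch model, using a quadrature grid and a standard normal prior (both assumptions of this illustration, not details from the study):

```python
import math

def rasch_prob(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def eap_theta(responses, mean=0.0, sd=1.0):
    """Expected a posteriori ability estimate on a quadrature grid with a
    normal(mean, sd) prior; unlike MLE, it stays finite for perfect and
    zero score patterns."""
    grid = [i / 10.0 - 4.0 for i in range(81)]  # -4.0 .. 4.0
    weights = []
    for t in grid:
        like = 1.0
        for b, u in responses:
            p = rasch_prob(t, b)
            like *= p if u else (1.0 - p)
        prior = math.exp(-0.5 * ((t - mean) / sd) ** 2)
        weights.append(like * prior)
    total = sum(weights)
    return sum(t * w for t, w in zip(grid, weights)) / total
```

Because the prior pulls estimates toward its mean, the estimators diverge most on short tests, which is consistent with the finding that the termination rule interacts strongly with estimator choice.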
Bergstrom, Betty A.; Stahl, John A. – 1992
This paper reports a method for assessing the adequacy of existing item banks for computer adaptive testing. The method takes into account content specifications, test length, and stopping rules, and can be used to determine if an existing item bank is adequate to administer a computer adaptive test efficiently across differing levels of examinee…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
Nandakumar, Ratna; Roussos, Louis – 2001
Computerized adaptive tests (CATs) pose major obstacles to the traditional assessment of differential item functioning (DIF). This paper proposes a modification of the SIBTEST DIF procedure for CATs, called CATSIB. CATSIB matches test takers on estimated ability based on unidimensional item response theory. To control for impact-induced Type I…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Identification