Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 5
Since 2006 (last 20 years): 6
Descriptor
Computer Assisted Testing: 11
Item Analysis: 11
Test Items: 7
Adaptive Testing: 5
Item Response Theory: 5
Simulation: 5
Difficulty Level: 4
Test Construction: 4
Models: 3
Item Banks: 2
Task Analysis: 2
Source
Journal of Educational Measurement: 11
Author
Albano, Anthony D.: 1
Ankenman, Robert D.: 1
Ariel, Adelaide: 1
Berger, Stéphanie: 1
Bergner, Yoav: 1
Bleiler, Timothy: 1
Cai, Liuhan: 1
Castellano, Katherine E.: 1
Chen, Shu-Ying: 1
Choi, Ikkyu: 1
Clough, Sara J.: 1
Publication Type
Journal Articles: 11
Reports - Research: 7
Reports - Evaluative: 4
Education Level
Elementary Education: 1
Berger, Stéphanie; Verschoor, Angela J.; Eggen, Theo J. H. M.; Moser, Urs – Journal of Educational Measurement, 2019
Calibration of an item bank for computer adaptive testing requires substantial resources. In this study, we investigated whether the efficiency of calibration under the Rasch model could be enhanced by improving the match between item difficulty and student ability. We introduced targeted multistage calibration designs, a design type that…
Descriptors: Simulation, Computer Assisted Testing, Test Items, Difficulty Level
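The Rasch model underlying this calibration study has a closed form: the probability of a correct response depends only on the gap between student ability and item difficulty. Below is a minimal sketch of that idea, with a hypothetical block-routing rule standing in for the paper's targeted multistage designs; the block names and difficulties are invented for illustration.

```python
import numpy as np

def rasch_prob(theta, b):
    """P(correct) under the Rasch model: logistic in (theta - b)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

rng = np.random.default_rng(0)
thetas = rng.normal(0.0, 1.0, size=1000)             # simulated abilities
blocks = {"easy": -1.0, "medium": 0.0, "hard": 1.0}  # hypothetical block difficulties

# Route each student to the block whose mean difficulty best matches
# ability (known here for simplicity), so responses are maximally informative.
for theta in thetas[:3]:
    block = min(blocks, key=lambda name: abs(blocks[name] - theta))
    print(f"theta={theta:+.2f} -> block={block}, "
          f"P(correct)={rasch_prob(theta, blocks[block]):.2f}")
```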
Bergner, Yoav; Choi, Ikkyu; Castellano, Katherine E. – Journal of Educational Measurement, 2019
Allowance for multiple chances to answer constructed response questions is a prevalent feature in computer-based homework and exams. We consider the use of item response theory in the estimation of item characteristics and student ability when multiple attempts are allowed but no explicit penalty is deducted for extra tries. This is common…
Descriptors: Models, Item Response Theory, Homework, Computer Assisted Instruction
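As a back-of-envelope companion to this abstract: if one assumes attempts are independent with a constant per-attempt success probability (a deliberate simplification; the paper's IRT treatment of attempts is more nuanced), the chance of eventually answering correctly grows geometrically with the number of allowed tries.

```python
def p_within_k(p, k):
    """Chance of at least one success in k attempts, assuming independent
    attempts with constant per-attempt success probability p (a deliberate
    simplification; the paper's IRT treatment of attempts differs)."""
    return 1.0 - (1.0 - p) ** k

for k in (1, 2, 3):
    print(f"{k} attempt(s): {p_within_k(0.4, k):.3f}")
```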
Joo, Seang-Hwane; Lee, Philseok; Stark, Stephen – Journal of Educational Measurement, 2018
This research derived information functions and proposed new scalar information indices to examine the quality of multidimensional forced choice (MFC) items based on the RANK model. We also explored how GGUM-RANK information, latent trait recovery, and reliability varied across three MFC formats: pairs (two response alternatives), triplets (three…
Descriptors: Item Response Theory, Models, Item Analysis, Reliability
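The GGUM-RANK information indices derived in the paper are beyond a short sketch, but the underlying notion of item information has a familiar dichotomous analogue: for a two-parameter logistic item, Fisher information is a²P(θ)(1 − P(θ)). A minimal illustration of that analogue, not the paper's MFC derivation:

```python
import numpy as np

def p_2pl(theta, a, b):
    """Two-parameter logistic response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of a dichotomous 2PL item: a^2 * P * (1 - P),
    peaking where theta is near the item difficulty b."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

thetas = np.linspace(-3, 3, 7)
print(info_2pl(thetas, a=1.5, b=0.0).round(3))
```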
Albano, Anthony D.; Cai, Liuhan; Lease, Erin M.; McConnell, Scott R. – Journal of Educational Measurement, 2019
Studies have shown that item difficulty can vary significantly based on the context of an item within a test form. In particular, item position may be associated with practice and fatigue effects that influence item parameter estimation. The purpose of this research was to examine the relevance of item position specifically for assessments used in…
Descriptors: Test Items, Computer Assisted Testing, Item Analysis, Difficulty Level
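One simple way to picture position effects of the kind studied here is to let effective difficulty drift with an item's position in the form. The sketch below bolts a linear position term onto the Rasch model; the functional form and the gamma value are hypothetical, chosen only to illustrate the phenomenon.

```python
import numpy as np

def rasch_with_position(theta, b, pos, gamma=0.02):
    """Rasch probability with a linear position drift in difficulty:
    logit P = theta - (b + gamma * pos). A positive gamma mimics fatigue
    (items effectively harder late in the form); gamma is hypothetical."""
    return 1.0 / (1.0 + np.exp(-(theta - (b + gamma * pos))))

for pos in (1, 20, 40):
    print(f"position {pos:2d}: P(correct) = {rasch_with_position(0.0, 0.0, pos):.3f}")
```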
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
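The paper's sequential hypothesis tests are IRT-specific, but the general flavor of sequential item monitoring can be conveyed with a one-sided CUSUM on response residuals: accumulate evidence that an item is behaving easier than its calibrated parameters imply (as it would if compromised), and flag it once the statistic crosses a threshold. This is an illustrative stand-in, not the procedure in the paper.

```python
import numpy as np

def cusum_flag(responses, expected, threshold=5.0):
    """One-sided CUSUM on (observed - expected) residuals: persistent
    over-performance on an item, relative to its calibrated IRT
    expectation, drives the statistic past the threshold."""
    s = 0.0
    for t, (x, p) in enumerate(zip(responses, expected), start=1):
        s = max(0.0, s + (x - p))
        if s > threshold:
            return t  # administration at which the item is flagged
    return None

rng = np.random.default_rng(1)
expected = np.full(200, 0.5)                       # model-implied P(correct)
observed = (rng.random(200) < 0.75).astype(float)  # item behaves too easy
print(cusum_flag(observed, expected))
```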
Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A. – Journal of Educational Measurement, 2009
In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…
Descriptors: Test Items, Adaptive Testing, Item Analysis, Item Response Theory
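A concrete picture of maximum-exposure-rate control, in the spirit of the Sympson-Hetter family of methods the paper builds on: after items are ranked by information, each is administered only with its control probability, which in practice would be tuned by simulation so no item exceeds the target exposure rate. The item names and k values below are invented.

```python
import random

def administer(ranked_items, k, rng=random.Random(0)):
    """Walk down the information-ranked candidate list and administer
    item i only with its control probability k[i], Sympson-Hetter style.
    Names and k values here are hypothetical."""
    for item in ranked_items:
        if rng.random() < k[item]:
            return item
    return ranked_items[-1]  # fall back to the last candidate

k = {"item_A": 0.3, "item_B": 0.8, "item_C": 1.0}
print(administer(["item_A", "item_B", "item_C"], k))
```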

Wainer, Howard – Journal of Educational Measurement, 1989
This paper reviews the role of the item in test construction and suggests some new methods of item analysis, including dynamic, graphical item analysis that exploits the advantages of modern, high-speed, highly interactive computing. Several illustrations are provided. (Author/TJH)
Descriptors: Computer Assisted Testing, Computer Graphics, Graphs, Item Analysis
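A small taste of the graphical item analysis the paper advocates: an empirical item characteristic curve, built by binning examinees on an ability proxy (here, a noisy total-score stand-in) and computing the proportion answering the item correctly in each bin. The data are simulated; only the technique is standard.

```python
import numpy as np

def empirical_icc(item_scores, ability_proxy, n_bins=5):
    """Empirical item characteristic curve: bin examinees on an ability
    proxy (e.g., total score) and report the proportion answering the
    item correctly within each bin."""
    edges = np.quantile(ability_proxy, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(ability_proxy, edges[1:-1]), 0, n_bins - 1)
    return [item_scores[bins == b].mean() for b in range(n_bins)]

rng = np.random.default_rng(2)
theta = rng.normal(size=500)
item = (rng.random(500) < 1 / (1 + np.exp(-(theta - 0.2)))).astype(float)
proxy = theta + rng.normal(scale=0.5, size=500)  # noisy stand-in for total score
print([round(p, 2) for p in empirical_icc(item, proxy)])
```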
A Closer Look at Using Judgments of Item Difficulty to Change Answers on Computerized Adaptive Tests
Vispoel, Walter P.; Clough, Sara J.; Bleiler, Timothy – Journal of Educational Measurement, 2005
Recent studies have shown that restricting review and answer change opportunities on computerized adaptive tests (CATs) to items within successive blocks reduces time spent in review, satisfies most examinees' desires for review, and controls against distortion in proficiency estimates resulting from intentional incorrect answering of items prior…
Descriptors: Mathematics, Item Analysis, Adaptive Testing, Computer Assisted Testing
Ariel, Adelaide; Veldkamp, Bernard P.; van der Linden, Wim J. – Journal of Educational Measurement, 2004
Preventing items from being over- or underexposed is one of the main problems in computerized adaptive testing. Though the problem of overexposed items can be solved using a probabilistic item-exposure control method, such methods are unable to deal with the problem of underexposed items. Using a system of rotating item pools,…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Test Construction
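The rotation idea can be sketched mechanically: carve overlapping sub-pools out of the master pool and serve successive examinees from successive sub-pools, so items left out of one pool keep circulating in another. The striding scheme below is a toy assumption for illustration, not the paper's assembly model.

```python
def rotating_pools(master_pool, pool_size, n_pools):
    """Carve n_pools overlapping sub-pools out of a master pool by
    striding through the item list; successive examinees draw from
    successive pools, so every item keeps circulating."""
    stride = pool_size // 2
    return [[master_pool[(start * stride + j) % len(master_pool)]
             for j in range(pool_size)]
            for start in range(n_pools)]

master = [f"item{i:02d}" for i in range(12)]
for i, pool in enumerate(rotating_pools(master, pool_size=6, n_pools=4)):
    print(i, pool)
```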

Tatsuoka, Kikumi K. – Journal of Educational Measurement, 1987
This study examined whether the item response curves from a two-parameter model reflected characteristics of the mathematics items, each of which required unique cognitive tasks. A computer program performed error analysis of test performance. Cognitive subtasks appeared to influence the slopes and difficulties of item response curves. (GDC)
Descriptors: Cognitive Processes, Computer Assisted Testing, Error Patterns, Item Analysis
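The two-parameter model referenced here gives each item a slope and a difficulty. One hedged way to see how distinct cognitive subtasks could reshape those curves: if an item requires several independent subtasks (an illustrative assumption, not the paper's error-analysis method), its response curve is the product of the subtask curves, which is flatter and shifted relative to any single subtask.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Two-parameter logistic item response curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def p_conjunctive(theta, subtasks):
    """Item curve for an item demanding every subtask, assuming the
    subtasks are independent: the product of the subtask 2PL curves."""
    return np.prod([p_2pl(theta, a, b) for a, b in subtasks], axis=0)

theta = np.linspace(-2, 2, 5)
print(p_2pl(theta, a=1.2, b=0.0).round(2))
print(p_conjunctive(theta, [(1.2, -0.5), (1.2, 0.5)]).round(2))
```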
Chen, Shu-Ying; Ankenman, Robert D. – Journal of Educational Measurement, 2004
The purpose of this study was to compare the effects of four item selection rules--(1) Fisher information (F), (2) Fisher information with a posterior distribution (FP), (3) Kullback-Leibler information with a posterior distribution (KP), and (4) completely randomized item selection (RN)--with respect to the precision of trait estimation and the…
Descriptors: Test Length, Adaptive Testing, Computer Assisted Testing, Test Selection
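Two of the four selection rules compared in this study are easy to sketch: maximum Fisher information at the current trait estimate (F) and completely randomized selection (RN). The posterior-weighted variants (FP, KP) integrate their criterion over a posterior for theta and are omitted here. The item pool below is simulated.

```python
import numpy as np

rng = np.random.default_rng(3)

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    p = p_2pl(theta, a, b)
    return a**2 * p * (1 - p)

# Hypothetical pool: (a, b) parameters for 20 items.
pool = [(rng.uniform(0.8, 2.0), rng.normal()) for _ in range(20)]

def select_F(theta_hat, available):
    """Rule (1): maximize Fisher information at the current estimate."""
    return max(available, key=lambda i: fisher_info(theta_hat, *pool[i]))

def select_RN(theta_hat, available):
    """Rule (4): completely randomized selection."""
    return int(rng.choice(sorted(available)))

available = set(range(len(pool)))
print("F  picks item", select_F(0.0, available))
print("RN picks item", select_RN(0.0, available))
```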