Wyse, Adam E.; McBride, James R. – Journal of Educational Measurement, 2021
A key consideration when administering any computerized adaptive test (CAT) is how much adaptation is present when the test is used in practice. This study introduces a new framework for measuring the amount of adaptation in Rasch-based CATs by examining the differences between the selected item locations (Rasch item difficulty parameters) of the…
Descriptors: Item Response Theory, Computer Assisted Testing, Adaptive Testing, Test Items
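To make the idea concrete, here is a minimal sketch (with hypothetical data; the function name and the mean-absolute-gap summary are illustrative assumptions, not the authors' exact statistic) of comparing administered Rasch item locations against the interim ability estimates:

```python
import numpy as np

def adaptation_gap(item_difficulties, interim_thetas):
    """Mean absolute gap between the administered Rasch item locations and
    the provisional ability estimate at the moment each item was selected.
    A highly adaptive CAT keeps this gap near zero; a fixed form does not."""
    b = np.asarray(item_difficulties, dtype=float)
    theta = np.asarray(interim_thetas, dtype=float)
    return float(np.mean(np.abs(b - theta)))

# Hypothetical 5-item administration for one examinee
print(adaptation_gap([0.2, -0.1, 0.5, 0.3, 0.4],
                     [0.0, 0.3, 0.1, 0.4, 0.35]))
```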
Berger, Stéphanie; Verschoor, Angela J.; Eggen, Theo J. H. M.; Moser, Urs – Journal of Educational Measurement, 2019
Calibration of an item bank for computer adaptive testing requires substantial resources. In this study, we investigated whether the efficiency of calibration under the Rasch model could be enhanced by improving the match between item difficulty and student ability. We introduced targeted multistage calibration designs, a design type that…
Descriptors: Simulation, Computer Assisted Testing, Test Items, Difficulty Level
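A rough simulation sketch of the underlying idea, assuming a hypothetical two-stage routing design and cut score (none of these values come from the study): routing students toward second-stage modules near their ability should shrink the difficulty-ability mismatch that makes calibration inefficient.

```python
import numpy as np

rng = np.random.default_rng(7)

def rasch_response(theta, b):
    """Simulate Rasch (1PL) responses: P(correct) = 1 / (1 + exp(-(theta - b)))."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return (rng.random(p.shape) < p).astype(int)

# Hypothetical two-stage targeted design: a medium routing module sends each
# student to an easy or a hard second-stage module based on the stage-1 score.
thetas = rng.normal(0.0, 1.0, size=2000)
stage1 = rasch_response(thetas[:, None], np.zeros((1, 5))).sum(axis=1)

easy_b, hard_b = np.full(10, -1.0), np.full(10, 1.0)
to_hard = stage1 >= 3                          # assumed routing cut score
gap_easy = np.abs(thetas[~to_hard, None] - easy_b).mean()
gap_hard = np.abs(thetas[to_hard, None] - hard_b).mean()
print(f"mean |theta - b|: easy module {gap_easy:.2f}, hard module {gap_hard:.2f}")
```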
Albano, Anthony D.; Cai, Liuhan; Lease, Erin M.; McConnell, Scott R. – Journal of Educational Measurement, 2019
Studies have shown that item difficulty can vary significantly based on the context of an item within a test form. In particular, item position may be associated with practice and fatigue effects that influence item parameter estimation. The purpose of this research was to examine the relevance of item position specifically for assessments used in…
Descriptors: Test Items, Computer Assisted Testing, Item Analysis, Difficulty Level
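For illustration, a small simulation under an assumed linear position effect on Rasch difficulty (the drift size and direction are hypothetical, not estimates from the paper); it shows how the same item looks harder when placed later in a form:

```python
import numpy as np

rng = np.random.default_rng(11)

def p_correct(theta, b, position, drift_per_item=0.02):
    """Rasch probability with a hypothetical linear position effect:
    effective difficulty rises by `drift_per_item` logits per serial
    position, mimicking fatigue (a practice effect would be negative)."""
    b_eff = b + drift_per_item * position
    return 1.0 / (1.0 + np.exp(-(theta - b_eff)))

thetas = rng.normal(size=5000)
for pos in (0, 20, 40):                         # same item, three positions
    p = p_correct(thetas, b=0.0, position=pos)
    observed = (rng.random(p.shape) < p).mean()
    print(f"position {pos:2d}: proportion correct = {observed:.3f}")
```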
Veldkamp, Bernard P. – Journal of Educational Measurement, 2016
Many standardized tests are now administered by computer rather than in paper-and-pencil format. Computer-based delivery brings certain advantages. One is the ability to adapt the difficulty level of the test to the ability level of the test taker, in what has been termed computerized adaptive testing (CAT). A second…
Descriptors: Computer Assisted Testing, Reaction Time, Standardized Tests, Difficulty Level
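As background on how a CAT matches difficulty to ability, here is a minimal maximum-information item-selection sketch under the Rasch model (the bank values and function name are illustrative assumptions, not the paper's method):

```python
import numpy as np

def next_item(theta_hat, bank_b, administered):
    """Select the unused Rasch item with maximum Fisher information at the
    current ability estimate. Under the Rasch model, I(theta) = p(1 - p),
    which peaks when item difficulty equals theta, so this effectively
    matches item difficulty to the provisional ability level."""
    p = 1.0 / (1.0 + np.exp(-(theta_hat - bank_b)))
    info = p * (1.0 - p)
    info[list(administered)] = -np.inf          # never reselect an item
    return int(np.argmax(info))

bank_b = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])  # hypothetical item bank
print(next_item(theta_hat=0.3, bank_b=bank_b, administered={2}))  # -> 3
```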
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items in order to enhance test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
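A simplified, non-sequential sketch of one such check, assuming a single-batch z-test for an item drifting easier than its calibrated difficulty (the paper's actual procedure repeats such tests sequentially across batches with controlled error rates; all values below are hypothetical):

```python
import numpy as np
from scipy import stats

def drift_z(thetas, responses, b):
    """One-sided z statistic testing whether an item has become easier than
    its calibrated Rasch difficulty b (as it might after exposure or
    compromise), by comparing observed to model-expected correct counts."""
    p = 1.0 / (1.0 + np.exp(-(np.asarray(thetas, dtype=float) - b)))
    return (np.sum(responses) - p.sum()) / np.sqrt((p * (1.0 - p)).sum())

rng = np.random.default_rng(3)
thetas = rng.normal(size=400)
p_true = 1.0 / (1.0 + np.exp(-(thetas + 0.3)))   # item now behaves as b = -0.3
responses = (rng.random(400) < p_true).astype(int)
z = drift_z(thetas, responses, b=0.2)            # calibrated difficulty
print(z, z > stats.norm.ppf(0.99))               # flag at the 1% level
```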
Li, Feiming; Cohen, Allan; Shen, Linjun – Journal of Educational Measurement, 2012
Computer-based tests (CBTs) often use random ordering of items in order to minimize item exposure and reduce the potential for answer copying. Little research has been done, however, to examine item position effects for these tests. In this study, different versions of a Rasch model and different response time models were examined and applied to…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Models
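For readers unfamiliar with response-time modeling, a sketch of the widely used lognormal response-time model in the van der Linden tradition (parameter values are hypothetical, and the paper's specific model variants may differ):

```python
import numpy as np

rng = np.random.default_rng(5)

def lognormal_rt(tau, beta, alpha, size=1):
    """Draw response times from a van der Linden-style lognormal model:
    log T ~ Normal(beta - tau, 1/alpha^2), with examinee speed tau, item
    time intensity beta, and a discrimination-like scale alpha."""
    return np.exp(rng.normal(beta - tau, 1.0 / alpha, size=size))

# Hypothetical contrast: the same examinee on two items whose time
# intensity differs, e.g., because position effects make later items faster.
print(lognormal_rt(tau=0.0, beta=1.2, alpha=2.0, size=3))
print(lognormal_rt(tau=0.0, beta=1.0, alpha=2.0, size=3))
```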
A Closer Look at Using Judgments of Item Difficulty to Change Answers on Computerized Adaptive Tests
Vispoel, Walter P.; Clough, Sara J.; Bleiler, Timothy – Journal of Educational Measurement, 2005
Recent studies have shown that restricting review and answer change opportunities on computerized adaptive tests (CATs) to items within successive blocks reduces time spent in review, satisfies most examinees' desires for review, and controls against distortion in proficiency estimates resulting from intentional incorrect answering of items prior…
Descriptors: Mathematics, Item Analysis, Adaptive Testing, Computer Assisted Testing
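A toy sketch of a block-restricted review rule of the kind described (the block size and function are illustrative assumptions): review and answer changes are permitted only among items in the examinee's current block.

```python
def can_review(current_item_index, target_item_index, block_size=5):
    """Blocked review policy: an examinee may revisit and change an answer
    only if the target item falls in the same block as the current item."""
    return current_item_index // block_size == target_item_index // block_size

print(can_review(7, 5))   # True: items 5-9 share a block when block_size=5
print(can_review(7, 3))   # False: item 3 is in an earlier, closed block
```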
Bridgeman, Brent; Cline, Frederick – Journal of Educational Measurement, 2004
Time limits on some computer-adaptive tests (CATs) are such that many examinees have difficulty finishing, and some examinees may be administered tests with more time-consuming items than others. Results from over 100,000 examinees suggested that about half of the examinees must guess on the final six questions of the analytical section of the…
Descriptors: Guessing (Tests), Timed Tests, Adaptive Testing, Computer Assisted Testing
Wise, Steven L.; And Others – Journal of Educational Measurement, 1992
Performance of 156 undergraduate and 48 graduate students on a self-adapted test (SFAT), in which students choose the difficulty level of their test items, was compared with performance on a computer-adapted test (CAT). Those taking the SFAT obtained higher ability scores and reported lower posttest state anxiety than did CAT takers. (SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level