Luecht, Richard M. – 2003
This paper presents a multistage adaptive testing test development paradigm that promises to handle content balancing and other test development needs, psychometric reliability concerns, and item exposure. The bundled multistage adaptive testing (BMAT) framework is a modification of the computer-adaptive sequential testing framework introduced by…
Descriptors: Adaptive Testing, Computer Assisted Testing, High Stakes Tests, Mastery Tests
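The routing idea behind multistage adaptive frameworks such as BMAT can be illustrated with a minimal sketch: examinees complete a routing module, and their score on it selects an easier or harder second-stage module. The cutoffs and module names below are hypothetical, not Luecht's actual specification.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def route(num_correct, n_items, cutoffs=(0.4, 0.7)):
    """Map a routing-module proportion correct to a second-stage module.

    Cutoffs are illustrative; an operational multistage design would set
    them from psychometric targets and content constraints.
    """
    p = num_correct / n_items
    if p < cutoffs[0]:
        return "easy"
    elif p < cutoffs[1]:
        return "medium"
    return "hard"
```

Because routing decisions depend only on module-level scores, content balancing and item exposure can be managed at the module-assembly stage rather than item by item, which is the practical appeal of multistage designs.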
Chang, Shu-Ren; Plake, Barbara S.; Ferdous, Abdullah A. – Online Submission, 2005
This study examined the time that examinees of different ability levels spend taking a CAT. It was found that high-ability examinees spend more time on the pretest items, which are not tailored to the examinees' ability level, than do lower-ability examinees. Higher-ability examinees showed persistence with test…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Reaction Time
Patsula, Liane N.; Steffen, Manfred – 1997
One challenge associated with computerized adaptive testing (CAT) is the maintenance of test and item security while allowing for daily testing. An alternative to continually creating new pools containing an independent set of items would be to consider each CAT pool as a sample of items from a larger collection (referred to as a VAT) rather than…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Multiple Choice Tests
Patsula, Liane N.; Pashley, Peter J. – 1997
Many large-scale testing programs routinely pretest new items alongside operational (or scored) items to determine their empirical characteristics. If these pretest items pass certain statistical criteria, they are placed into an operational item pool; otherwise they are edited and re-pretested or simply discarded. In these situations, reliable…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Banks
Segall, Daniel O. – 1999
Two new methods for improving the measurement precision of a general test factor are proposed and evaluated. One new method provides a multidimensional item response theory estimate obtained from conventional administrations of multiple-choice test items that span general and nuisance dimensions. The other method chooses items adaptively to…
Descriptors: Ability, Adaptive Testing, Item Response Theory, Measurement Techniques
van der Linden, Wim J. – 1999
A constrained computerized adaptive testing (CAT) algorithm is presented that automatically equates the number-correct scores on adaptive tests. The algorithm can be used to equate number-correct scores across different administrations of the same adaptive test as well as to an external reference test. The constraints are derived from a set of…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Blais, Jean-Guy; Raiche, Gilles – 2002
This paper examines some characteristics of the statistics associated with the sampling distribution of the proficiency level estimate when the Rasch model is used. These characteristics make it possible to judge the meaning of the proficiency level estimate obtained in adaptive testing, and as a consequence they can illustrate the…
Descriptors: Ability, Adaptive Testing, Error of Measurement, Estimation (Mathematics)
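The proficiency estimate whose sampling distribution is studied in entries like this one is typically a maximum-likelihood estimate under the Rasch model. A minimal Newton-Raphson sketch (illustrative only, not the authors' code):

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mle_theta(responses, difficulties, theta=0.0, iters=25):
    """Newton-Raphson MLE of proficiency given 0/1 responses.

    Note: the MLE is undefined for all-correct or all-incorrect
    response patterns; operational CAT programs handle those cases
    separately (e.g., with Bayesian estimates).
    """
    for _ in range(iters):
        ps = [rasch_prob(theta, b) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, ps))  # score function
        info = sum(p * (1 - p) for p in ps)               # Fisher information
        theta += grad / info
    return theta
```

The reciprocal square root of the Fisher information at the solution gives the asymptotic standard error, which is one ingredient of the sampling-distribution characteristics such papers examine.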
Peer reviewed. Cudeck, Robert; And Others – Applied Psychological Measurement, 1980
Tailored testing by Cliff's method of implied orders was simulated through the use of responses gathered during conventional administration of the Stanford-Binet Intelligence Scale. Tailoring eliminated approximately half the responses with only modest decreases in score reliability. (Author/BW)
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Intelligence Tests
Peer reviewed. Spineti, John P.; Hambleton, Ronald K. – Educational and Psychological Measurement, 1977
The effectiveness of various tailored testing strategies for use in objective based instructional programs was investigated. The three factors of a tailored testing strategy under study with various hypothetical distributions of abilities across two learning hierarchies were test length, mastery cutting score, and starting point. (Author/JKS)
Descriptors: Adaptive Testing, Computer Assisted Testing, Criterion Referenced Tests, Cutting Scores
Peer reviewed. Chen, Ssu-Kuang; And Others – Educational and Psychological Measurement, 1997
A simulation study explored the effect of population distribution on maximum likelihood estimation (MLE) and expected a posteriori (EAP) estimation in computerized adaptive testing based on the rating scale model of D. Andrich (1978). The choice between EAP and MLE for particular situations is discussed. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
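The EAP estimator contrasted with MLE in this study can be sketched as a posterior mean computed by quadrature over a standard-normal prior. For brevity this hedged illustration uses the dichotomous Rasch model rather than Andrich's rating scale model; the grid bounds and point count are arbitrary choices.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def eap_theta(responses, difficulties, n_points=81):
    """EAP estimate of proficiency with a standard-normal prior."""
    grid = [-4 + 8 * k / (n_points - 1) for k in range(n_points)]
    num = den = 0.0
    for t in grid:
        prior = math.exp(-0.5 * t * t)          # N(0, 1) density kernel
        like = 1.0
        for x, b in zip(responses, difficulties):
            p = rasch_prob(t, b)
            like *= p if x else (1 - p)
        w = prior * like
        num += t * w
        den += w
    return num / den                            # posterior mean
```

Unlike the MLE, the EAP estimate stays finite for perfect and zero scores, which is one reason the choice between the two matters in adaptive testing, where early response patterns are often extreme.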
Peer reviewed. Eignor, Daniel R. – Journal of Educational Measurement, 1997
The authors of the "Guidelines," a task force of eight, intend to present an organized list of features to be considered in reporting or evaluating computerized-adaptive assessments. Apart from a few weaknesses, the book is a useful and complete document that will be very helpful to test developers. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Evaluation Methods, Guidelines
Peer reviewed. Koch, William R.; Dodd, Barbara G. – Educational and Psychological Measurement, 1995
Basic procedures for performing computerized adaptive testing based on the successive intervals (SI) Rasch model were evaluated. The SI model was applied to simulated and real attitude data sets. Item pools as small as 30 items performed well, and the model appeared practical for Likert-type data. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Item Response Theory
Peer reviewed. Wise, Steven L.; Finney, Sara J.; Enders, Craig K.; Freeman, Sharon A.; Severance, Donald D. – Applied Measurement in Education, 1999
Examined whether providing item review on a computerized adaptive test could be used by examinees to inflate their scores. Two studies involving 139 undergraduates suggest that examinees are not highly proficient at discriminating item difficulty. A simulation study showed the usefulness of a strategy identified by G. Kingsbury (1996) as a way to…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Higher Education
Peer reviewed. Mooney, G. A.; Bligh, J. G.; Leinster, S. J. – Medical Teacher, 1998
Presents a system of classification for describing computer-based assessment techniques based on the level of action and educational activity they offer. Illustrates 10 computer-based assessment techniques and discusses their educational value. Contains 14 references. (Author)
Descriptors: Adaptive Testing, Classification, Computer Assisted Testing, Foreign Countries
Peer reviewed. Vispoel, Walter P. – Journal of Educational Measurement, 1998
Studied effects of administration mode [computer adaptive test (CAT) versus self-adaptive test (SAT)], item-by-item answer feedback, and test anxiety on results from computerized vocabulary tests taken by 293 college students. CATs were more reliable than SATs, and administration time was less when feedback was provided. (SLD)
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Feedback


