van der Linden, Wim J. – 1999
A constrained computerized adaptive testing (CAT) algorithm is presented that automatically equates the number-correct scores on adaptive tests. The algorithm can be used to equate number-correct scores across different administrations of the same adaptive test as well as to an external reference test. The constraints are derived from a set of…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Peer reviewed – Cudeck, Robert; And Others – Applied Psychological Measurement, 1980
Tailored testing by Cliff's method of implied orders was simulated through the use of responses gathered during conventional administration of the Stanford-Binet Intelligence Scale. Tailoring eliminated approximately half the responses with only modest decreases in score reliability. (Author/BW)
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Intelligence Tests
Peer reviewed – Spineti, John P.; Hambleton, Ronald K. – Educational and Psychological Measurement, 1977
The effectiveness of various tailored testing strategies for use in objective-based instructional programs was investigated. Three factors of a tailored testing strategy were studied under various hypothetical ability distributions across two learning hierarchies: test length, mastery cutting score, and starting point. (Author/JKS)
Descriptors: Adaptive Testing, Computer Assisted Testing, Criterion Referenced Tests, Cutting Scores
Peer reviewed – Chen, Ssu-Kuang; And Others – Educational and Psychological Measurement, 1997
A simulation study explored the effect of population distribution on maximum likelihood estimation (MLE) and expected a posteriori (EAP) estimation in computerized adaptive testing based on the rating scale model of D. Andrich (1978). The choice between EAP and MLE for particular situations is discussed. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
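The two estimators compared in the abstract above can be illustrated with a minimal sketch. Note the simplifications: this uses a dichotomous Rasch model rather than Andrich's rating scale model studied in the article, and the item difficulties and response pattern are invented for illustration.

```python
import numpy as np

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def log_likelihood(theta, responses, b):
    p = rasch_p(theta, b)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

def mle_theta(responses, b, grid=np.linspace(-4, 4, 801)):
    """Maximum likelihood estimate: theta maximizing the likelihood on a grid."""
    ll = np.array([log_likelihood(t, responses, b) for t in grid])
    return grid[np.argmax(ll)]

def eap_theta(responses, b, grid=np.linspace(-4, 4, 801)):
    """Expected a posteriori estimate: posterior mean under a N(0, 1) prior."""
    ll = np.array([log_likelihood(t, responses, b) for t in grid])
    post = np.exp(ll) * np.exp(-grid**2 / 2)  # likelihood x standard normal prior
    post /= post.sum()
    return np.sum(grid * post)

b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # hypothetical item difficulties
u = np.array([1, 1, 1, 0, 0])              # hypothetical response pattern
print(mle_theta(u, b), eap_theta(u, b))
```

The population-distribution effect the study examines arises from the prior in the EAP step: EAP shrinks estimates toward the prior mean, while MLE uses the responses alone.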
Peer reviewed – Eignor, Daniel R. – Journal of Educational Measurement, 1997
The authors of the "Guidelines," a task force of eight, intend to present an organized list of features to be considered in reporting or evaluating computerized-adaptive assessments. Apart from a few weaknesses, the book is a useful and complete document that will be very helpful to test developers. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Evaluation Methods, Guidelines
Peer reviewed – Koch, William R.; Dodd, Barbara G. – Educational and Psychological Measurement, 1995
Basic procedures for performing computerized adaptive testing based on the successive intervals (SI) Rasch model were evaluated. The SI model was applied to simulated and real attitude data sets. Item pools as small as 30 items performed well, and the model appeared practical for Likert-type data. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Item Response Theory
Peer reviewed – Wise, Steven L.; Finney, Sara J.; Enders, Craig K.; Freeman, Sharon A.; Severance, Donald D. – Applied Measurement in Education, 1999
Examined whether providing item review on a computerized adaptive test could be used by examinees to inflate their scores. Two studies involving 139 undergraduates suggest that examinees are not highly proficient at discriminating item difficulty. A simulation study showed the usefulness of a strategy identified by G. Kingsbury (1996) as a way to…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Higher Education
Peer reviewed – Mooney, G. A.; Bligh, J. G.; Leinster, S. J. – Medical Teacher, 1998
Presents a system of classification for describing computer-based assessment techniques based on the level of action and educational activity they offer. Illustrates 10 computer-based assessment techniques and discusses their educational value. Contains 14 references. (Author)
Descriptors: Adaptive Testing, Classification, Computer Assisted Testing, Foreign Countries
Peer reviewed – Vispoel, Walter P. – Journal of Educational Measurement, 1998
Studied effects of administration mode [computer adaptive test (CAT) versus self-adaptive test (SAT)], item-by-item answer feedback, and test anxiety on results from computerized vocabulary tests taken by 293 college students. CATs were more reliable than SATs, and administration time was less when feedback was provided. (SLD)
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Feedback
Peer reviewed – Wang, Shudong; Wang, Tianyou – Applied Psychological Measurement, 2001
Evaluated the relative accuracy of the weighted likelihood estimate (WLE) of T. Warm (1989) compared to the maximum likelihood estimate (MLE), expected a posteriori estimate, and maximum a posteriori estimate. Results of the Monte Carlo study, which show the relative advantages of each approach, suggest that the test termination rule has more…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
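Warm's weighted likelihood estimate compared in the abstract above can be characterized as the theta maximizing L(theta) times the square root of the test information, which makes a grid comparison with plain MLE straightforward. This is a simplified sketch: it uses a dichotomous Rasch model and invented item parameters, not the conditions of the Monte Carlo study.

```python
import numpy as np

def rasch_p(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def log_lik(theta, u, b):
    p = rasch_p(theta, b)
    return np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

def information(theta, b):
    """Test information for the Rasch model: sum of p * (1 - p) over items."""
    p = rasch_p(theta, b)
    return np.sum(p * (1 - p))

def mle(u, b, grid=np.linspace(-4, 4, 801)):
    return grid[np.argmax([log_lik(t, u, b) for t in grid])]

def wle(u, b, grid=np.linspace(-4, 4, 801)):
    """Warm's WLE maximizes L(theta) * sqrt(I(theta)), reducing MLE bias."""
    obj = [log_lik(t, u, b) + 0.5 * np.log(information(t, b)) for t in grid]
    return grid[np.argmax(obj)]

b = np.array([-1.5, -0.5, 0.5, 1.5])  # hypothetical item difficulties
u = np.array([1, 1, 1, 0])            # 3 of 4 correct
print(mle(u, b), wle(u, b))
```

Because the information weight peaks where the items are most informative, the WLE pulls extreme MLE values inward, which is the bias reduction the comparison studies measure.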
Peer reviewed – Reise, Steven P. – Applied Psychological Measurement, 2001
This book contains a series of research articles about computerized adaptive testing (CAT) written for advanced psychometricians. The book is divided into sections on: (1) item selection and examinee scoring in CAT; (2) examples of CAT applications; (3) item banks; (4) determining model fit; and (5) using testlets in CAT. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Banks
Peer reviewed – Wang, Tianyou; Hanson, Bradley A.; Lau, Che-Ming A. – Applied Psychological Measurement, 1999
Extended the use of a beta prior in trait estimation to the maximum expected a posteriori (MAP) method of Bayesian estimation. This new method, essentially unbiased MAP, was compared with MAP, essentially unbiased expected a posteriori, weighted likelihood, and maximum-likelihood estimation methods. The new method significantly reduced bias in…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Estimation (Mathematics)
Peer reviewed – van der Linden, Wim J. – Applied Psychological Measurement, 1999
Proposes a procedure for empirical initialization of the trait (theta) estimator in adaptive testing that is based on the statistical relation between theta and background variables known prior to test administration. Illustrates the procedure for an adaptive version of a test from the Dutch General Aptitude Battery. (SLD)
Descriptors: Adaptive Testing, Aptitude Tests, Bayesian Statistics, Computer Assisted Testing
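The empirical-initialization idea in the abstract above can be sketched as a regression of previously estimated trait values on background variables, with the fitted relation supplying the starting theta for a new examinee. Everything below is invented for illustration; the article's procedure is tied to the Dutch General Aptitude Battery, not to these made-up variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: two background variables known before testing
# (e.g. education level, prior test score) and trait estimates for 200 past
# examinees, simulated here with a known linear relation plus noise.
X = rng.normal(size=(200, 2))
theta = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=200)

# Fit theta ~ background via ordinary least squares (with an intercept column).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, theta, rcond=None)

def initial_theta(background):
    """Empirical starting value for the trait estimator of a new examinee."""
    return coef[0] + background @ coef[1:]

print(initial_theta(np.array([1.0, 0.0])))
```

Starting the adaptive test at this predicted theta, rather than at a fixed value such as zero, is what lets the first items already be roughly matched to the examinee.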
A Closer Look at Using Judgments of Item Difficulty to Change Answers on Computerized Adaptive Tests
Vispoel, Walter P.; Clough, Sara J.; Bleiler, Timothy – Journal of Educational Measurement, 2005
Recent studies have shown that restricting review and answer change opportunities on computerized adaptive tests (CATs) to items within successive blocks reduces time spent in review, satisfies most examinees' desires for review, and controls against distortion in proficiency estimates resulting from intentional incorrect answering of items prior…
Descriptors: Mathematics, Item Analysis, Adaptive Testing, Computer Assisted Testing
Ferdous, Abdullah A.; Plake, Barbara S.; Chang, Shu-Ren – Educational Assessment, 2007
The purpose of this study was to examine the effect of pretest items on response time in an operational, fixed-length, time-limited computerized adaptive test (CAT). These pretest items are embedded within the CAT, but unlike the operational items, are not tailored to the examinee's ability level. If examinees with higher ability levels need less…
Descriptors: Pretests Posttests, Reaction Time, Computer Assisted Testing, Test Items