Publication Date
| Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 19 |
| Since 2022 (last 5 years) | 92 |
| Since 2017 (last 10 years) | 191 |
| Since 2007 (last 20 years) | 439 |
Descriptor
| Term | Records |
| --- | --- |
| Adaptive Testing | 1052 |
| Computer Assisted Testing | 1052 |
| Test Items | 448 |
| Item Response Theory | 284 |
| Test Construction | 274 |
| Item Banks | 232 |
| Simulation | 195 |
| Comparative Analysis | 139 |
| Foreign Countries | 117 |
| Higher Education | 104 |
| Test Format | 99 |
Location
| Location | Records |
| --- | --- |
| Taiwan | 11 |
| Australia | 8 |
| Netherlands | 8 |
| New York | 8 |
| Turkey | 8 |
| United Kingdom | 8 |
| California | 7 |
| Spain | 6 |
| Canada | 5 |
| China | 5 |
| Denmark | 5 |
Laws, Policies, & Programs
| Law / Program | Records |
| --- | --- |
| No Child Left Behind Act 2001 | 7 |
| Education Consolidation… | 1 |
| Every Student Succeeds Act… | 1 |
| Race to the Top | 1 |
Peer reviewed: Cudeck, Robert A.; And Others – Educational and Psychological Measurement, 1977
TAILOR, a FORTRAN computer program for tailored testing, is described. The procedure for a joint ordering of persons and items with no pretesting as the basis for the tailored test is given, and a brief discussion of the computer program is included. (Author/JKS)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Programs, Test Construction
Peer reviewed: McCormick, Douglas J.; Cliff, Norman – Educational and Psychological Measurement, 1977
An interactive computer program for tailored testing, called TAILOR, is presented. The program runs on the APL system. A cumulative file for each examinee is established and tests are then tailored to each examinee; extensive pretesting is not necessary. (JKS)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Programs, Test Construction
Kingsbury, G. Gage; Hauser, Carl – Northwest Evaluation Association, 2004
Among the changes in education called for under the No Child Left Behind Act is the requirement that states test students in a number of grades and subject areas. Scores from these tests are to be used for a variety of purposes, from identifying whether individual students are proficient, to helping determine whether schools are causing adequate growth…
Descriptors: Federal Legislation, Computer Assisted Testing, Adaptive Testing, Educational Assessment
Schnipke, Deborah L.; Reese, Lynda M. – 1999
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test taker ability. This study incorporated testlets (bundles of items) into two-stage and multistage designs, and compared the precision of the ability estimates derived from these designs with those derived from a standard computerized adaptive test (CAT)…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Law Schools
Bowles, Ryan; Pommerich, Mary – 2001
Many arguments have been made against allowing examinees to review and change their answers after completing a computer adaptive test (CAT). These arguments include: (1) increased bias; (2) decreased precision; and (3) susceptibility to test-taking strategies. Results of simulations suggest that the strength of these arguments is reduced or…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Review (Reexamination)
Kim, Jong-Pil – 1999
This study was conducted to investigate the equivalence of scores from paper-and-pencil (P&P) tests and computerized tests (CTs) through meta-analysis of primary studies using both kinds of tests. For this synthesis, 51 primary studies were selected, resulting in 226 effect sizes. The first synthesis was a typical meta-analysis that treated…
Descriptors: Adaptive Testing, Computer Assisted Testing, Effect Size, Meta Analysis
Zwick, Rebecca; Thayer, Dorothy T. – 2003
This study investigated the applicability to computerized adaptive testing (CAT) data of a differential item functioning (DIF) analysis that involves an empirical Bayes (EB) enhancement of the popular Mantel Haenszel (MH) DIF analysis method. The computerized Law School Admission Test (LSAT) assumed for this study was similar to that currently…
Descriptors: Adaptive Testing, Bayesian Statistics, College Entrance Examinations, Computer Assisted Testing
Thompson, Tony D.; Davey, Tim – 2000
This paper applies specific information item selection using a method developed by T. Davey and M. Fan (2000) to a multiple-choice passage-based reading test that is being developed for computer administration. Data used to calibrate the multidimensional item parameters for the simulation study consisted of item responses from randomly equivalent…
Descriptors: Adaptive Testing, Computer Assisted Testing, Reading Tests, Selection
Habick, Timothy – 1999
With the advent of computer-based testing (CBT) and the need to increase the number of items available in computer adaptive test pools, the idea of item variants was conceived. An item variant can be defined as an item with content based on an existing item to a greater or lesser degree. Item variants were first proposed as a way to enhance test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Test Construction
Raiche, Gilles; Blais, Jean-Guy – 2002
In a computerized adaptive test (CAT), it would be desirable to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Decreasing the number of items is accompanied, however, by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. G. Raiche (2000) has…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Item Response Theory
Optimal Stratification of Item Pools in a-Stratified Computerized Adaptive Testing. Research Report.
van der Linden, Wim J. – 2000
A method based on 0-1 linear programming (LP) is presented to stratify an item pool optimally for use in "alpha"-stratified adaptive testing. Because the 0-1 LP model belongs to the subclass of models with a network-flow structure, efficient solutions are possible. The method is applied to a previous item pool from the computerized…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Linear Programming
Peer reviewed: McKinley, Robert L.; Reckase, Mark D. – AEDS Journal, 1980
Describes tailored testing (in which a computer selects appropriate items from an item bank while an examinee is taking a test) and shows it to be superior to paper-and-pencil tests in such areas as reliability, security, and appropriateness of items. (IRT)
Descriptors: Adaptive Testing, Computer Assisted Testing, Higher Education, Program Evaluation
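The tailored-testing loop this entry describes (a computer selecting appropriate items from an item bank while the examinee is taking the test) can be sketched as a minimal Rasch-model CAT. The item bank, difficulties, response pattern, and stopping rule below are illustrative assumptions, not taken from any of the cited papers:

```python
import math

# Hypothetical item bank: Rasch difficulties (illustrative only)
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]

def p_correct(theta, b):
    """Rasch model: probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def fisher_info(theta, b):
    """Item information under the Rasch model: p * (1 - p)."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def next_item(theta, administered):
    """Pick the unused item with maximum information at the current estimate."""
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta, bank[i]))

def eap_estimate(responses):
    """EAP ability estimate over a grid, standard-normal prior."""
    grid = [-4.0 + 0.1 * k for k in range(81)]
    post = []
    for th in grid:
        w = math.exp(-0.5 * th * th)  # unnormalized N(0, 1) prior
        for b, u in responses:
            p = p_correct(th, b)
            w *= p if u else (1.0 - p)
        post.append(w)
    total = sum(post)
    return sum(th * w for th, w in zip(grid, post)) / total

# Simulate a short tailored test for one fixed answer pattern
theta, administered, responses = 0.0, set(), []
for u in [1, 1, 0, 1]:  # illustrative answers: right, right, wrong, right
    i = next_item(theta, administered)
    administered.add(i)
    responses.append((bank[i], u))
    theta = eap_estimate(responses)
```

Because item information peaks where difficulty matches ability, the first item selected is the one with difficulty closest to the starting estimate, and each response shifts the estimate before the next pick.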
Peer reviewed: Glas, Cees A. W.; van der Linden, Wim J. – Applied Psychological Measurement, 2003
Developed a multilevel item response theory (IRT) model that allows for differences between the distributions of item parameters of families of item clones. Results from simulation studies based on an item pool from the Law School Admission Test illustrate the accuracy of the item pool calibration and adaptive testing procedures based on the model. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Item Response Theory
Peer reviewed: Laatsch, Linda; Choca, James – Psychological Assessment, 1994
The authors propose using cluster analysis to develop a branching logic that would allow the adaptive administration of psychological instruments. The proposed methodology is described in detail and used to develop an adaptive version of the Halstead Category Test from archival data. (SLD)
Descriptors: Adaptive Testing, Cluster Analysis, Computer Assisted Testing, Psychological Testing
Peer reviewed: van der Linden, Wim J. – Psychometrika, 1998
This paper suggests several item selection criteria for adaptive testing that are all based on the use of the true posterior. Some of the ability estimators produced by these criteria are discussed and empirically criticized. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing