| Publication Date | Records |
| In 2026 | 0 |
| Since 2025 | 25 |
| Since 2022 (last 5 years) | 121 |
| Since 2017 (last 10 years) | 250 |
| Since 2007 (last 20 years) | 576 |
| Audience | Records |
| Researchers | 38 |
| Practitioners | 25 |
| Teachers | 8 |
| Administrators | 6 |
| Counselors | 3 |
| Policymakers | 2 |
| Parents | 1 |
| Students | 1 |
| Location | Records |
| Taiwan | 12 |
| United Kingdom | 10 |
| Australia | 9 |
| Netherlands | 9 |
| California | 8 |
| New York | 8 |
| Turkey | 8 |
| Germany | 7 |
| Canada | 6 |
| Florida | 6 |
| Japan | 6 |
Kim, Jong-Pil – 1999
This study was conducted to investigate the equivalence of scores from paper-and-pencil (P&P) tests and computerized tests (CTs) through meta-analysis of primary studies using both kinds of tests. For this synthesis, 51 primary studies were selected, resulting in 226 effect sizes. The first synthesis was a typical meta-analysis that treated…
Descriptors: Adaptive Testing, Computer Assisted Testing, Effect Size, Meta Analysis
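The fixed-effect pooling of effect sizes at the core of such a synthesis can be sketched in a few lines of Python; the d values and sampling variances below are invented for illustration and are not from Kim's 51 studies.

```python
import numpy as np

# Illustrative effect sizes (standardized mean differences, P&P minus CT)
# and their sampling variances -- NOT values from Kim's synthesis.
d = np.array([0.05, -0.12, 0.20, 0.01, -0.08])
v = np.array([0.020, 0.015, 0.030, 0.010, 0.025])

# Fixed-effect weights are inverse variances.
w = 1.0 / v

# Weighted mean effect size and its standard error.
d_bar = np.sum(w * d) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))

# Q statistic for homogeneity of effect sizes across studies.
Q = np.sum(w * (d - d_bar) ** 2)

print(f"pooled d = {d_bar:.3f} (SE = {se:.3f}), Q = {Q:.2f}")
```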
School Renaissance Inst., Inc., Madison, WI. – 2000
A study compared the Scholastic Reading Inventory (SRI) Interactive Test with Advantage Learning Systems' STAR Reading Computer-Adaptive Standardized Test. Because the two tests use different methods to collect and calculate norm-referenced scores, scale-score measures of reading performance were used for the comparative…
Descriptors: Adaptive Testing, Comparative Analysis, Comparative Testing, Elementary Secondary Education
Zwick, Rebecca; Thayer, Dorothy T. – 2003
This study investigated the applicability to computerized adaptive testing (CAT) data of a differential item functioning (DIF) analysis that involves an empirical Bayes (EB) enhancement of the popular Mantel-Haenszel (MH) DIF analysis method. The computerized Law School Admission Test (LSAT) assumed for this study was similar to that currently…
Descriptors: Adaptive Testing, Bayesian Statistics, College Entrance Examinations, Computer Assisted Testing
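The Mantel-Haenszel statistic that the EB approach enhances can be sketched as follows; the 2x2 counts per matching stratum are invented, and the empirical Bayes step itself (shrinking each item's statistic toward the distribution across items) is not shown.

```python
import numpy as np

# Illustrative 2x2 counts per matching stratum (e.g., total-score level);
# columns: reference-correct, reference-incorrect, focal-correct,
# focal-incorrect. These counts are made up for the sketch, not LSAT data.
strata = np.array([
    [40, 60, 30, 70],
    [55, 45, 45, 55],
    [70, 30, 60, 40],
    [85, 15, 78, 22],
], dtype=float)

A, B, C, D = strata.T          # unpack the four cell counts
T = A + B + C + D              # stratum sample sizes

# Mantel-Haenszel common odds ratio across strata.
alpha_mh = np.sum(A * D / T) / np.sum(B * C / T)

# ETS delta scale: negative values indicate DIF against the focal group.
mh_d_dif = -2.35 * np.log(alpha_mh)

print(f"alpha_MH = {alpha_mh:.3f}, MH D-DIF = {mh_d_dif:.3f}")
```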
Thompson, Tony D.; Davey, Tim – 2000
This paper applies specific information item selection using a method developed by T. Davey and M. Fan (2000) to a multiple-choice passage-based reading test that is being developed for computer administration. Data used to calibrate the multidimensional item parameters for the simulation study consisted of item responses from randomly equivalent…
Descriptors: Adaptive Testing, Computer Assisted Testing, Reading Tests, Selection
Habick, Timothy – 1999
With the advent of computer-based testing (CBT) and the need to increase the number of items available in computer adaptive test pools, the idea of item variants was conceived. An item variant can be defined as an item with content based on an existing item to a greater or lesser degree. Item variants were first proposed as a way to enhance test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Test Construction
Raiche, Gilles; Blais, Jean-Guy – 2002
In a computerized adaptive test (CAT), it would be desirable to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Decreasing the number of items is accompanied, however, by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. G. Raiche (2000) has…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Item Response Theory
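The bias the authors address can be demonstrated with a minimal expected a posteriori (EAP) sketch under hypothetical 2PL item parameters; this illustrates the shrinkage problem rather than the corrected estimator the paper develops.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap(responses, a, b, prior_mean=0.0, prior_sd=1.0):
    """Expected a posteriori theta estimate on a quadrature grid."""
    grid = np.linspace(-4, 4, 161)
    prior = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2)
    like = np.ones_like(grid)
    for u, ai, bi in zip(responses, a, b):
        p = p_2pl(grid, ai, bi)
        like *= p ** u * (1 - p) ** (1 - u)
    post = prior * like
    return np.sum(grid * post) / np.sum(post)

# A high-ability examinee (true theta = 2.0) answering a short test:
# hypothetical item parameters, all items answered correctly.
a = np.array([1.2, 1.0, 1.5, 0.8, 1.1])
b = np.array([0.0, 0.5, -0.5, 1.0, 0.2])
u = np.ones(5)

# The estimate is pulled toward the N(0,1) prior mean -- the bias that
# grows as the true theta moves away from the a priori estimate.
print(f"EAP estimate: {eap(u, a, b):.2f}  (true theta = 2.00)")
```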
Optimal Stratification of Item Pools in a-Stratified Computerized Adaptive Testing. Research Report.
van der Linden, Wim J. – 2000
A method based on 0-1 linear programming (LP) is presented to stratify an item pool optimally for use in "alpha"-stratified adaptive testing. Because the 0-1 LP model belongs to the subclass of models with a network-flow structure, efficient solutions are possible. The method is applied to a previous item pool from the computerized…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Linear Programming
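The baseline heuristic that the 0-1 LP formulation improves on is a sort-and-slice stratification of the pool by the discrimination (a) parameter; a minimal sketch with an invented pool follows. The LP model itself, which can also balance other pool attributes across the strata, is not reproduced here.

```python
import numpy as np

# Hypothetical discrimination (a) parameters for a 200-item pool; the
# paper stratifies an actual CAT pool, and does so with 0-1 LP rather
# than this simple heuristic.
rng = np.random.default_rng(0)
a_params = rng.lognormal(mean=0.0, sigma=0.3, size=200)

K = 4  # number of strata

# Baseline a-stratification: order items by a and cut into K equal strata.
# Early in the test, items come from the low-a stratum; high-a items are
# saved for later stages, when theta estimates are more precise.
order = np.argsort(a_params)
strata = np.array_split(order, K)

for k, items in enumerate(strata, 1):
    print(f"stratum {k}: {len(items)} items, "
          f"mean a = {a_params[items].mean():.2f}")
```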
Peer reviewed
McKinley, Robert L.; Reckase, Mark D. – AEDS Journal, 1980
Describes tailored testing (in which a computer selects appropriate items from an item bank while an examinee is taking a test) and shows it to be superior to paper-and-pencil tests in such areas as reliability, security, and appropriateness of items. (IRT)
Descriptors: Adaptive Testing, Computer Assisted Testing, Higher Education, Program Evaluation
Peer reviewed
Glas, Cees A. W.; van der Linden, Wim J. – Applied Psychological Measurement, 2003
Developed a multilevel item response theory (IRT) model that allows for differences between the distributions of item parameters of families of item clones. Results from simulation studies based on an item pool from the Law School Admission Test illustrate the accuracy of the item pool calibration and adaptive testing procedures based on the model. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Item Response Theory
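The two-level structure such a model assumes can be sketched generatively: family-level parameters with clone parameters drawn around them. All values below are hypothetical.

```python
import numpy as np

# Two-level structure for item clones: each family has its own
# distribution of parameters, and a clone's (a, b) is drawn around the
# family-level values. All numbers here are hypothetical.
rng = np.random.default_rng(5)
n_families = 10

family_b = rng.normal(0.0, 1.0, n_families)     # family difficulty means
family_a = rng.lognormal(0.0, 0.2, n_families)  # family discrimination means

# Five clones per family; the within-family spread is what the multilevel
# model allows to differ between families (held constant in this sketch).
clone_b = {f: rng.normal(family_b[f], 0.25, 5) for f in range(n_families)}
clone_a = {f: family_a[f] * rng.lognormal(0.0, 0.1, 5)
           for f in range(n_families)}

print("family 0 difficulty mean:", round(family_b[0], 2))
print("family 0 clone difficulties:", np.round(clone_b[0], 2))
```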
Peer reviewed
Schnipke, Deborah L.; Green, Bert F. – Journal of Educational Measurement, 1995
Two item selection algorithms, one based on maximal differentiation between examinees and one based on item response theory and maximum information for each examinee, were compared in simulated linear and adaptive tests of cognitive ability. Adaptive tests based on maximum information were clearly superior. (SLD)
Descriptors: Adaptive Testing, Algorithms, Comparative Analysis, Item Response Theory
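A minimal version of the maximum-information rule the study found superior, embedded in a simulated adaptive test under hypothetical 2PL items:

```python
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1 - p)

rng = np.random.default_rng(1)
a = rng.lognormal(0.0, 0.3, 300)   # hypothetical 2PL pool
b = rng.normal(0.0, 1.0, 300)

true_theta, theta_hat = 1.0, 0.0
grid = np.linspace(-4, 4, 161)
log_like = np.zeros_like(grid)
used = np.zeros(300, dtype=bool)

for _ in range(20):
    # Maximum-information rule: pick the unused item that is most
    # informative at the current provisional theta estimate.
    info = np.where(used, -np.inf, info_2pl(theta_hat, a, b))
    j = int(np.argmax(info))
    used[j] = True

    # Simulate a response and update theta by grid-search ML.
    u = rng.random() < p_2pl(true_theta, a[j], b[j])
    p = p_2pl(grid, a[j], b[j])
    log_like += np.log(p) if u else np.log(1 - p)
    theta_hat = grid[np.argmax(log_like)]

print(f"final theta estimate: {theta_hat:.2f} (true = {true_theta:.2f})")
```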
Peer reviewed
Laatsch, Linda; Choca, James – Psychological Assessment, 1994
The authors propose using cluster analysis to develop a branching logic that would allow the adaptive administration of psychological instruments. The proposed methodology is described in detail and used to develop an adaptive version of the Halstead Category Test from archival data. (SLD)
Descriptors: Adaptive Testing, Cluster Analysis, Computer Assisted Testing, Psychological Testing
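One way to sketch the clustering step (though not the authors' specific branching logic) is to cluster items by the similarity of their response profiles in archival data; the simulated responses below stand in for the Halstead Category Test records.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical archival data: 500 examinees x 30 binary items
# (a stand-in for the Halstead Category Test archival responses).
rng = np.random.default_rng(2)
theta = rng.normal(size=(500, 1))
diff = rng.normal(size=(1, 30))
responses = (rng.random((500, 30))
             < 1 / (1 + np.exp(-(theta - diff)))).astype(int)

# Cluster items by the similarity of their response profiles; items in a
# cluster carry redundant information, so administering a representative
# or two per cluster and branching on the result shortens the test.
item_profiles = responses.T                      # 30 items x 500 examinees
Z = linkage(item_profiles, method="ward")
clusters = fcluster(Z, t=4, criterion="maxclust")

for c in range(1, 5):
    print(f"cluster {c}: items {np.where(clusters == c)[0].tolist()}")
```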
Peer reviewed
van der Linden, Wim J. – Psychometrika, 1998
This paper suggests several item selection criteria for adaptive testing that are all based on the use of the true posterior. Some of the ability estimators produced by these criteria are discussed and empirically criticized. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
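One such fully Bayesian criterion, minimum expected posterior variance, can be sketched as follows; the item parameters and the quadrature grid are assumptions of the sketch, not details from the paper.

```python
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def expected_posterior_variance(post, grid, a_j, b_j):
    """Posterior-predictive expectation of the posterior variance of
    theta after observing item j."""
    p = p_2pl(grid, a_j, b_j)
    pred_correct = np.sum(post * p)          # predictive P(U = 1)
    epv = 0.0
    for u, w in ((1, pred_correct), (0, 1 - pred_correct)):
        new_post = post * (p if u else 1 - p)
        new_post /= new_post.sum()
        mean = np.sum(grid * new_post)
        epv += w * np.sum((grid - mean) ** 2 * new_post)
    return epv

grid = np.linspace(-4, 4, 161)
post = np.exp(-0.5 * grid ** 2)              # current N(0, 1) posterior
post /= post.sum()

rng = np.random.default_rng(3)
a = rng.lognormal(0.0, 0.3, 50)              # hypothetical candidate items
b = rng.normal(0.0, 1.0, 50)

# Select the item whose administration minimizes the expected
# posterior variance of theta.
epvs = [expected_posterior_variance(post, grid, a[j], b[j])
        for j in range(50)]
print("selected item:", int(np.argmin(epvs)))
```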
Peer reviewed
May, Kim; Nicewander, W. Alan – Educational and Psychological Measurement, 1998
Item response theory was used to study how much of the scale distortion in ordinary difference scores can be removed by taking differences of estimated examinee proficiencies (theta) in conventional or adaptive testing situations. Using estimated thetas removed much of the scale distortion for both conventional and adaptive tests. (SLD)
Descriptors: Ability, Achievement Gains, Adaptive Testing, Estimation (Mathematics)
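The scale distortion at issue can be illustrated by giving every examinee the same gain on the theta scale and watching the expected number-correct gain shrink at the extremes; the 40-item test below is hypothetical.

```python
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(4)
a = rng.lognormal(0.0, 0.3, 40)       # hypothetical 40-item test
b = rng.normal(0.0, 1.0, 40)

def expected_number_correct(theta):
    return p_2pl(theta, a, b).sum()

# Every examinee gains the same 0.5 on the theta scale...
for theta0 in (-2.0, 0.0, 2.0):
    gain_nc = (expected_number_correct(theta0 + 0.5)
               - expected_number_correct(theta0))
    print(f"theta {theta0:+.1f} -> {theta0 + 0.5:+.1f}: "
          f"theta gain = 0.50, expected number-correct gain = {gain_nc:.2f}")
# ...but the number-correct gain shrinks toward the extremes: the scale
# distortion that theta-based difference scores avoid.
```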
Peer reviewed
Neuman, George; Baydoun, Ramzi – Applied Psychological Measurement, 1998
Studied the cross-mode equivalence of paper-and-pencil and computer-based clerical tests with 141 undergraduates. Found no differences across modes for the two types of tests. Differences can be minimized when speeded computerized tests follow the same administration and response procedures as the paper format. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Higher Education
Peer reviewed
Vispoel, Walter P. – Journal of Educational Measurement, 1998
Compared results from computer-adaptive and self-adaptive tests under conditions in which item review was and was not permitted for 379 college students. Results suggest that, when given the opportunity, most examinees will change answers, but usually only to a small portion of items, resulting in some benefit to the test taker. (SLD)
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Higher Education


