Publication Date
In 2025: 14
Since 2024: 52
Since 2021 (last 5 years): 132
Since 2016 (last 10 years): 266
Since 2006 (last 20 years): 587
Audience
Researchers: 38
Practitioners: 25
Teachers: 8
Administrators: 6
Counselors: 3
Policymakers: 2
Parents: 1
Students: 1
Location
Taiwan: 12
United Kingdom: 10
Netherlands: 9
Australia: 8
California: 8
New York: 8
Turkey: 8
Germany: 7
Canada: 6
Florida: 6
Japan: 6
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L. – 2003
This paper proposes an item selection algorithm that can be used to neutralize the effect of time limits in computer adaptive testing. The method is based on a statistical model for the response-time distributions of the test takers on the items in the pool that is updated each time a new item has been administered. Predictions from the model are…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Linear Programming

Pommerich, Mary; Segall, Daniel O. – 2003
Research discussed in this paper was conducted as part of an ongoing large-scale simulation study to evaluate methods of calibrating pretest items for computerized adaptive testing (CAT) pools. The simulation was designed to mimic the operational CAT Armed Services Vocational Aptitude Battery (ASVAB) testing program, in which a single pretest item…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Maximum Likelihood Statistics

van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – 2000
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. For computerized adaptive tests (CAT) with dichotomous items, several person-fit statistics for detecting nonfitting item score patterns have been proposed. Both for paper-and-pencil (P&P) tests and CATs, detection of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Response Theory

Zhu, Daming; Fan, Meichu – 1999
The convention for selecting starting points (that is, initial items) on a computerized adaptive test (CAT) is to choose as starting points items of medium difficulty for all examinees. Selecting a starting point based on prior information about an individual's ability was first suggested many years ago, but has been believed unimportant provided…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level

Roos, Linda L.; And Others – Educational and Psychological Measurement, 1996
This article describes Minnesota Computerized Adaptive Testing Language program code for using the MicroCAT 3.5 testing software to administer several types of self-adapted tests. Code is provided for: a basic self-adapted test; a self-adapted version of an adaptive mastery test; and a restricted self-adapted test. (Author/SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Mastery Tests, Programming

Stocking, Martha L. – Journal of Educational and Behavioral Statistics, 1996
An alternative method for scoring adaptive tests, based on number-correct scores, is explored and compared with a method that relies more directly on item response theory. Using the number-correct score with necessary adjustment for intentional differences in adaptive test difficulty is a statistically viable scoring method. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Response Theory

Potenza, Maria T.; Stocking, Martha L. – Journal of Educational Measurement, 1997
Common strategies for dealing with flawed items in conventional testing, grounded in principles of fairness to examinees, are re-examined in the context of adaptive testing. The additional strategy of retesting from a pool cleansed of flawed items is found, through a Monte Carlo study, to bring about no practical improvement. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Monte Carlo Methods

Warm, Thomas A. – Psychometrika, 1989
A new estimation method, Weighted Likelihood Estimation (WLE), is derived mathematically. Two Monte Carlo studies compare WLE with maximum likelihood estimation and Bayesian modal estimation of ability in conventional tests and tailored tests. Advantages of WLE are discussed. (SLD)
Descriptors: Ability, Adaptive Testing, Equations (Mathematics), Estimation (Mathematics)
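A minimal sketch of the contrast this abstract describes, assuming a Rasch model: Warm's weighted likelihood estimation (WLE) augments the ML score equation with a bias-correction term J(θ)/(2I(θ)), which keeps the estimate finite and pulls it inward relative to MLE. The item difficulties and response pattern below are hypothetical, and the root finder is a plain bisection, not Warm's own implementation.

```python
import numpy as np

def p_rasch(theta, b):
    # Rasch probability of a correct response, item difficulties b
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def wle_score(theta, u, b):
    # Warm's weighted-likelihood estimating equation for the Rasch model:
    # ML score sum(u - p) plus the bias-correction term J(theta)/(2 I(theta)),
    # where for Rasch J = sum p*q*(1 - 2p) and I = sum p*q.
    p = p_rasch(theta, b)
    q = 1.0 - p
    info = np.sum(p * q)
    j = np.sum(p * q * (1.0 - 2.0 * p))
    return np.sum(u - p) + j / (2.0 * info)

def solve(f, lo=-6.0, hi=6.0, tol=1e-8):
    # bisection; f is monotone decreasing in theta over this interval
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # hypothetical item difficulties
u = np.array([1, 1, 1, 0, 0])              # observed response pattern

theta_wle = solve(lambda t: wle_score(t, u, b))
theta_mle = solve(lambda t: np.sum(u - p_rasch(t, b)))  # plain ML score
```

For this pattern the correction term is negative near the ML solution, so the WLE lands slightly below the MLE; for perfect or zero scores the MLE diverges while the WLE stays finite.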

Folk, Valerie Greaud; Green, Bert F. – Applied Psychological Measurement, 1989
Some effects of using unidimensional item response theory (IRT) were examined when the assumption of unidimensionality was violated. Adaptive and nonadaptive tests were used. It appears that use of a unidimensional model can bias parameter estimation, adaptive item selection, and ability estimation for the two types of testing. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Computer Simulation

Dodd, Barbara G.; And Others – Applied Psychological Measurement, 1989
General guidelines are developed to assist practitioners in devising operational computerized adaptive testing systems based on the graded response model. The effects of the following major variables were examined: item pool size; stepsize used along the trait continuum until maximum likelihood estimation could be calculated; and stopping rule…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Item Banks

Reckase, Mark D. – Educational Measurement: Issues and Practice, 1989
Requirements for adaptive testing are reviewed, and the reasons implementation has taken so long are explored. The adaptive test is illustrated through the Stanford-Binet Intelligence Scale of L. M. Terman and M. A. Merrill (1960). Current adaptive testing is tied to the development of item response theory. (SLD)
Descriptors: Adaptive Testing, Educational Development, Elementary Secondary Education, Latent Trait Theory

Kingsbury, G. Gage; Zara, Anthony R. – Applied Measurement in Education, 1991
This simulation investigated two procedures that reduce differences between paper-and-pencil testing and computerized adaptive testing (CAT) by making CAT content sensitive. Results indicate that the price in terms of additional test items of using constrained CAT for content balancing is much smaller than that of using testlets. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Computer Simulation

Birenbaum, Menucha – Studies in Educational Evaluation, 1994
A scheme is introduced for classifying assessment methods by using a mapping sentence and examples of three tasks from research methodology are provided along with their profiles (structures) based on the mapping sentence. An instrument to determine student assessment preferences is presented and explored. (SLD)
Descriptors: Adaptive Testing, Classification, Educational Assessment, Measures (Individuals)

Wang, Tianyou; Vispoel, Walter P. – Journal of Educational Measurement, 1998
Used simulations of computerized adaptive tests to evaluate results yielded by four commonly used ability estimation methods: maximum likelihood estimation (MLE) and three Bayesian approaches. Results show clear distinctions between MLE and Bayesian methods. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
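A minimal sketch of the MLE/Bayesian contrast this abstract evaluates, assuming a 2PL model and a standard-normal prior: expected a posteriori (EAP) estimation, one common Bayesian approach, averages θ over the posterior and so shrinks toward the prior mean, while MLE maximizes the raw likelihood. The item parameters and response pattern are hypothetical, and MLE is done by grid search rather than iteration for brevity.

```python
import numpy as np

def p_2pl(theta, a, b):
    # 2PL probability of a correct response (discrimination a, difficulty b)
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def likelihood(grid, u, a, b):
    # likelihood of response pattern u at each theta on the grid
    p = p_2pl(grid[:, None], a, b)                # grid points x items
    return np.prod(np.where(u == 1, p, 1.0 - p), axis=1)

# hypothetical 2PL item parameters and one response pattern
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.5, 1.0])
u = np.array([1, 1, 0, 0])

grid = np.linspace(-4.0, 4.0, 161)
L = likelihood(grid, u, a, b)

# EAP: posterior mean under a standard-normal prior (quadrature on the grid)
prior = np.exp(-0.5 * grid**2)
post = L * prior
theta_eap = np.sum(grid * post) / np.sum(post)

# MLE: maximizer of the raw likelihood over the same grid
theta_mle = grid[np.argmax(L)]
```

With only a few items the prior dominates, so the EAP estimate sits noticeably closer to zero than the MLE, which is the kind of distinction the simulations compare.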

van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 1999
Proposes an algorithm that minimizes the asymptotic variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The criterion results in a closed-form expression that is easy to evaluate. Also shows how the algorithm can be modified if the interest is in a test with a "simple ability structure."…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing