Publication Date

| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 25 |
| Since 2022 (last 5 years) | 121 |
| Since 2017 (last 10 years) | 250 |
| Since 2007 (last 20 years) | 576 |
Audience

| Audience | Results |
| --- | --- |
| Researchers | 38 |
| Practitioners | 25 |
| Teachers | 8 |
| Administrators | 6 |
| Counselors | 3 |
| Policymakers | 2 |
| Parents | 1 |
| Students | 1 |
Location

| Location | Results |
| --- | --- |
| Taiwan | 12 |
| United Kingdom | 10 |
| Australia | 9 |
| Netherlands | 9 |
| California | 8 |
| New York | 8 |
| Turkey | 8 |
| Germany | 7 |
| Canada | 6 |
| Florida | 6 |
| Japan | 6 |
Peer reviewed: Chen, Ssu-Kuang; Hou, Liling; Dodd, Barbara G. – Educational and Psychological Measurement, 1998
A simulation study was conducted to investigate the application of expected a posteriori (EAP) trait estimation in computerized adaptive tests (CAT) based on the partial credit model and compare it with maximum likelihood estimation (MLE). Results show the conditions under which EAP and MLE provide relatively accurate estimation in CAT. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
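A minimal sketch, not from the cited paper, of the two estimators the abstract compares under the partial credit model (PCM): EAP as the posterior mean over a quadrature grid with a standard normal prior, and MLE via a simple grid search. The item step difficulties and responses below are illustrative values.

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Category probabilities for one PCM item with step difficulties `deltas`."""
    # Category k's exponent is the cumulative sum of (theta - delta_j); category 0 is 0.
    exponents = np.concatenate([[0.0], np.cumsum(theta - deltas)])
    expx = np.exp(exponents - exponents.max())   # stabilized softmax
    return expx / expx.sum()

def log_likelihood(theta, responses, item_deltas):
    return sum(np.log(pcm_probs(theta, d)[x])
               for x, d in zip(responses, item_deltas))

def eap_estimate(responses, item_deltas, n_quad=61):
    """EAP: posterior mean over a quadrature grid, N(0,1) prior."""
    grid = np.linspace(-4, 4, n_quad)
    prior = np.exp(-0.5 * grid**2)
    like = np.array([np.exp(log_likelihood(t, responses, item_deltas)) for t in grid])
    post = prior * like
    return (grid * post).sum() / post.sum()

def mle_estimate(responses, item_deltas):
    """MLE: brute-force grid search (production code would use a Newton step)."""
    grid = np.linspace(-4, 4, 801)
    ll = [log_likelihood(t, responses, item_deltas) for t in grid]
    return grid[int(np.argmax(ll))]

# Three polytomous items with three categories each (two steps per item).
item_deltas = [np.array([-0.5, 0.5]), np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
responses = [2, 1, 2]                            # observed category scores
print(eap_estimate(responses, item_deltas), mle_estimate(responses, item_deltas))
```

Note how EAP always returns a finite estimate (the prior shrinks it toward zero), whereas the MLE is undefined for all-lowest or all-highest response patterns; that contrast drives the kind of accuracy comparison the study reports.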
Peer reviewed: Walker, Cindy M.; Beretvas, S. Natasha; Ackerman, Terry – Applied Measurement in Education, 2001
Conducted a simulation study of differential item functioning (DIF) to compare power and Type I error rates under two conditions: using an examinee's ability estimate as the conditioning variable with the CATSIB program, either with or without CATSIB's regression correction. Discusses implications of the findings for DIF detection. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Bias
Nandakumar, Ratna; Roussos, Louis – Journal of Educational and Behavioral Statistics, 2004
A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type I error inflation by employing a CAT version of the SIBTEST "regression correction." The…
Descriptors: Evaluation, Adaptive Testing, Computer Assisted Testing, Pretesting
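A hypothetical sketch of the SIBTEST-style statistic CATSIB builds on: examinees are stratified on an ability estimate, and the weighted between-group difference in mean item score is accumulated across strata. The regression correction CATSIB applies to each stratum mean (the paper's key contribution) is omitted here; this shows only the uncorrected "beta" statistic on simulated data.

```python
import numpy as np

def beta_uni(theta_hat, item_score, is_focal, n_strata=10):
    """Uncorrected beta: weighted sum over ability strata of (mean_ref - mean_focal)."""
    edges = np.quantile(theta_hat, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, theta_hat, side="right") - 1,
                     0, n_strata - 1)
    beta, n_used = 0.0, 0
    for k in range(n_strata):
        in_k = strata == k
        ref, foc = in_k & ~is_focal, in_k & is_focal
        if ref.any() and foc.any():          # stratum must contain both groups
            n_k = in_k.sum()
            beta += n_k * (item_score[ref].mean() - item_score[foc].mean())
            n_used += n_k
    return beta / n_used

rng = np.random.default_rng(0)
theta = rng.normal(size=2000)
focal = rng.random(2000) < 0.5
p = 1 / (1 + np.exp(-(theta - 0.3 * focal)))   # item 0.3 logits harder for focal group
score = (rng.random(2000) < p).astype(float)
print(beta_uni(theta, score, focal))           # positive => DIF against the focal group
```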
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2003
The Hetter and Sympson (1997, 1985) method provides probabilistic item-exposure control in computerized adaptive testing. Setting its control parameters to admissible values requires an iterative process of computer simulations that has been found to be time consuming, particularly if the parameters have to be set conditional on a realistic…
Descriptors: Law Schools, Adaptive Testing, Admission (School), Computer Assisted Testing
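An illustrative sketch of the unconditional Sympson-Hetter idea the abstract starts from (not van der Linden's conditional refinement): each item carries a control parameter `k[i]`, a selected item is actually administered with probability `k[i]`, and the parameters are tuned by repeated simulation until no item's exposure rate exceeds a target. The item "information" values and adjustment rule are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, test_len, r_max = 200, 20, 0.25
info = rng.gamma(2.0, 1.0, n_items)          # stand-in attractiveness ranking
k = np.ones(n_items)                         # exposure control parameters

for _ in range(30):                          # the time-consuming outer loop
    admin_count = np.zeros(n_items)
    n_sims = 2000
    for _ in range(n_sims):                  # simulate one examinee's test
        administered = []
        order = np.argsort(-info * rng.uniform(0.8, 1.2, n_items))  # noisy ranking
        for i in order:
            if rng.random() < k[i]:          # probabilistic exposure filter
                administered.append(i)
                if len(administered) == test_len:
                    break
        admin_count[np.array(administered)] += 1
    exposure = admin_count / n_sims
    if exposure.max() <= r_max + 0.01:       # all items within the target rate
        break
    over = exposure > r_max                  # damp overexposed items and re-simulate
    k[over] *= r_max / exposure[over]

print("max exposure rate:", exposure.max().round(3))
```

Each outer iteration re-simulates thousands of adaptive tests, which is exactly the computational cost the abstract flags; conditioning the parameters on ability strata multiplies that cost again.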
Hol, A. Michiel; Vorst, Harrie C. M.; Mellenbergh, Gideon J. – Applied Psychological Measurement, 2007
In a randomized experiment (n = 515), a computerized test and a computerized adaptive test (CAT) are compared. The item pool consists of 24 polytomous motivation items. Although the items are carefully selected, calibration data show that Samejima's graded response model did not fit the data optimally. A simulation study is done to assess possible…
Descriptors: Student Motivation, Simulation, Adaptive Testing, Computer Assisted Testing
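A small sketch of Samejima's graded response model (GRM), the polytomous model the abstract reports as fitting suboptimally: category probabilities are differences of adjacent cumulative (boundary) response curves. The parameter values below are illustrative only.

```python
import numpy as np

def grm_probs(theta, a, b):
    """Category probabilities for one GRM item.
    a: discrimination; b: increasing boundary difficulties (len = ncat - 1)."""
    pstar = 1 / (1 + np.exp(-a * (theta - np.asarray(b))))  # P(X >= k) for k = 1..m
    cum = np.concatenate([[1.0], pstar, [0.0]])
    return cum[:-1] - cum[1:]                               # P(X = k) for k = 0..m

# A 5-category motivation item: the probabilities sum to 1 at any theta.
probs = grm_probs(0.0, a=1.5, b=[-1.5, -0.5, 0.5, 1.5])
print(probs, probs.sum())
```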
Glas, Cees A. W.; Vos, Hans J. – 1998
A version of sequential mastery testing is studied in which response behavior is modeled by an item response theory (IRT) model. First, a general theoretical framework is sketched that is based on a combination of Bayesian sequential decision theory and item response theory. A discussion follows on how IRT-based sequential mastery testing can be…
Descriptors: Adaptive Testing, Bayesian Statistics, Item Response Theory, Mastery Tests
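A minimal sketch in the spirit of the combination the abstract describes: after each Rasch item, the posterior over theta is updated, and testing stops once the posterior probability of mastery (theta at or above a cutoff) is decisive either way. The cutoff, decision threshold, and item stream below are illustrative stand-ins for the full Bayesian sequential decision machinery.

```python
import numpy as np

grid = np.linspace(-4, 4, 161)
posterior = np.exp(-0.5 * grid**2)           # N(0,1) prior, unnormalized
posterior /= posterior.sum()
cut, decide_at = 0.0, 0.95                   # mastery cutoff; decision threshold

rng = np.random.default_rng(2)
true_theta = 0.8
for item in range(50):                       # administer at most 50 items
    b = rng.normal()                         # difficulty of the next Rasch item
    p_grid = 1 / (1 + np.exp(-(grid - b)))
    x = rng.random() < 1 / (1 + np.exp(-(true_theta - b)))  # simulated answer
    posterior *= p_grid if x else (1 - p_grid)              # Bayes update
    posterior /= posterior.sum()
    p_master = posterior[grid >= cut].sum()
    if p_master >= decide_at or p_master <= 1 - decide_at:
        break                                # classify as master / nonmaster

print(f"items used: {item + 1}, P(mastery) = {p_master:.3f}")
```

The sequential element is what distinguishes this from fixed-length mastery testing: examinees far from the cutoff are classified after only a few items, while borderline examinees keep receiving items.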
Ackerman, Terry A.; Davey, Tim C. – 1991
An adaptive test can usually match or exceed the measurement precision of conventional tests several times its length. This increased efficiency is not without costs, however, as the models underlying adaptive testing make strong assumptions about examinees and items. Most troublesome is the assumption that item pools are unidimensional. Truly…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Equations (Mathematics)
de Gruijter, Dato N. M. – 1988
Many applications of educational testing have a missing data aspect (MDA). This MDA is perhaps most pronounced in item banking, where each examinee responds to a different subtest of items from a large item pool and where both person and item parameter estimates are needed. The Rasch model is emphasized, and its non-parametric counterpart (the…
Descriptors: Adaptive Testing, Educational Testing, Estimation (Mathematics), Foreign Countries
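A small sketch of the missing-data aspect the abstract highlights: joint Rasch estimation from a sparse response matrix in which each examinee answered only a subset of the item bank. NaN marks unadministered items; simple alternating gradient steps (with perfect scores left unhandled) stand in for a full estimation routine such as conditional or marginal ML.

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_items = 500, 40
theta_true = rng.normal(size=n_persons)
b_true = rng.normal(size=n_items)
X = np.full((n_persons, n_items), np.nan)
for p in range(n_persons):                   # each person sees 15 random items
    items = rng.choice(n_items, 15, replace=False)
    prob = 1 / (1 + np.exp(-(theta_true[p] - b_true[items])))
    X[p, items] = (rng.random(15) < prob).astype(float)

mask = ~np.isnan(X)                          # observed cells only
R = np.where(mask, X, 0.0)
theta, b = np.zeros(n_persons), np.zeros(n_items)
for _ in range(200):                         # alternating gradient steps
    P = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    resid = (R - P) * mask                   # score residuals on observed cells
    theta += resid.sum(axis=1) / mask.sum(axis=1)
    b -= resid.sum(axis=0) / np.maximum(mask.sum(axis=0), 1)
    b -= b.mean()                            # identify the scale

print("item difficulty recovery r =", np.corrcoef(b, b_true)[0, 1].round(3))
```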
ERIC Clearinghouse on Tests, Measurement, and Evaluation, Princeton, NJ. – 1983
This brief overview notes that an adaptive test differs from standardized achievement tests in that it does not consist of a fixed set of items administered to every examinee in a group. Instead, the test is individualized for each examinee. The items administered to the examinee are selected from a large pool of items on the basis of the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Latent Trait Theory
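A compact sketch of the individualization the overview describes: a 2PL item bank, with each next item chosen as the one most informative at the examinee's current ability estimate, and the estimate updated after every response. All parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.uniform(0.8, 2.0, 300)               # discriminations
b = rng.normal(0, 1, 300)                    # difficulties
available = np.ones(300, dtype=bool)
theta_hat, true_theta = 0.0, 1.2

def info(theta):
    """2PL Fisher information of every item at theta."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

grid = np.linspace(-4, 4, 161)
posterior = np.exp(-0.5 * grid**2)           # EAP machinery for interim estimates

for _ in range(20):                          # a 20-item adaptive test
    scores = np.where(available, info(theta_hat), -np.inf)
    i = int(np.argmax(scores))               # most informative available item
    available[i] = False
    p_true = 1 / (1 + np.exp(-a[i] * (true_theta - b[i])))
    x = rng.random() < p_true                # simulated response
    p_grid = 1 / (1 + np.exp(-a[i] * (grid - b[i])))
    posterior *= p_grid if x else (1 - p_grid)
    theta_hat = float((grid * posterior).sum() / posterior.sum())

print("final estimate:", round(theta_hat, 2), "true:", true_theta)
```

Two different examinees running this loop receive almost entirely different item sequences, which is the sense in which the test "is individualized for each examinee."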
Patience, Wayne M.; Reckase, Mark D. – 1978
The feasibility of implementing self-paced computerized tailored testing in an undergraduate measurement and evaluation course was investigated, together with possible differences in achievement levels under paced versus self-paced testing schedules. A maximum likelihood tailored testing procedure based on the simple logistic model had…
Descriptors: Academic Achievement, Achievement Tests, Adaptive Testing, Computer Assisted Testing
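A brief sketch of the maximum likelihood machinery behind the procedure the abstract mentions: Newton-Raphson iteration for the ability estimate under the simple logistic (Rasch) model, given scored responses. The difficulties and responses are illustrative.

```python
import numpy as np

def rasch_mle(x, b, theta0=0.0, iters=20):
    """x: 0/1 responses; b: item difficulties.
    No finite MLE exists for all-correct or all-incorrect x."""
    theta = theta0
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(theta - np.asarray(b))))
        grad = np.sum(x - p)                 # d logL / d theta
        hess = -np.sum(p * (1 - p))          # d^2 logL / d theta^2
        theta -= grad / hess                 # Newton-Raphson step
    return theta

print(rasch_mle(np.array([1, 1, 0, 1, 0]), b=[-1.0, -0.5, 0.0, 0.5, 1.0]))
```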
Peer reviewed: McBride, James R. – Educational Leadership, 1985
Describes a system in which questions tailored to the examinee's capabilities are administered by computer. Enumerates possible benefits of the system, reviews the "state of the art," and predicts potential applications of computerized adaptive testing. (MCG)
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Program Design
Hambleton, Ronald K.; Sireci, Stephen G.; Swaminathan, H.; Xing, Dehui; Rizavi, Saba – 2003
The purposes of this research study were to develop and field test anchor-based judgmental methods for enabling test specialists to estimate item difficulty statistics. The study consisted of three related field tests. In each, researchers worked with six Law School Admission Test (LSAT) test specialists and one or more of the LSAT subtests. The…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Difficulty Level
Rabinowitz, Stanley; Brandt, Tamara – 2001
Computer-based assessment appears to offer the promise of radically improving both how assessments are implemented and the quality of the information they can deliver. However, as many states consider whether to embrace this new technology, serious concerns remain about the fairness of the new systems and the readiness of states (and districts and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Technology, Educational Testing
van der Linden, Wim J. – 1997
In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimator. In this paper, an empirical initialization of…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
van der Linden, Wim J. – 1997
The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple expression in closed form. In addition, it is…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
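A hedged sketch of the kind of criterion the abstract describes, under an assumed multidimensional 2PL bank: pick the item that minimizes the (asymptotic) variance c'I(theta)^{-1}c of the ML estimate of a target linear combination c of the abilities, where I(theta) accumulates the information matrices of administered items. The item parameters, the small ridge term, and holding theta fixed are simplifications of this sketch, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(5)
n_items, dim = 100, 2
A = rng.uniform(0.5, 1.5, (n_items, dim))     # discrimination vectors
d = rng.normal(0, 1, n_items)                 # intercepts
c = np.array([0.7, 0.3])                      # linear combination of interest

def item_info(i, theta):
    """Information matrix of item i at theta: p(1-p) a a' for the M2PL."""
    p = 1 / (1 + np.exp(-(A[i] @ theta + d[i])))
    return p * (1 - p) * np.outer(A[i], A[i])

theta = np.array([0.0, 0.0])
I_test = 0.1 * np.eye(dim)                    # small ridge keeps I invertible early on
administered = set()
for _ in range(15):
    best, best_var = None, np.inf
    for i in set(range(n_items)) - administered:
        var = c @ np.linalg.inv(I_test + item_info(i, theta)) @ c
        if var < best_var:
            best, best_var = i, var
    administered.add(best)
    I_test = I_test + item_info(best, theta)  # (theta would be re-estimated here)

print("variance of c'theta_hat:", round(float(c @ np.linalg.inv(I_test) @ c), 4))
```

The abstract's point is that this criterion reduces to a closed-form expression, so the inner loop does not actually need a matrix inversion per candidate item; the explicit inverse above is used only for transparency.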