Publication Date
| Date range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 25 |
| Since 2022 (last 5 years) | 121 |
| Since 2017 (last 10 years) | 250 |
| Since 2007 (last 20 years) | 576 |
Audience
| Audience | Results |
| --- | --- |
| Researchers | 38 |
| Practitioners | 25 |
| Teachers | 8 |
| Administrators | 6 |
| Counselors | 3 |
| Policymakers | 2 |
| Parents | 1 |
| Students | 1 |
Location
| Location | Results |
| --- | --- |
| Taiwan | 12 |
| United Kingdom | 10 |
| Australia | 9 |
| Netherlands | 9 |
| California | 8 |
| New York | 8 |
| Turkey | 8 |
| Germany | 7 |
| Canada | 6 |
| Florida | 6 |
| Japan | 6 |
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2003
The Hetter and Sympson (1997; 1985) method is a probabilistic item-exposure control method in computerized adaptive testing. Setting its control parameters to admissible values requires an iterative process of computer simulations that has been found to be time consuming, particularly if the parameters have to be set conditional on a realistic…
Descriptors: Law Schools, Adaptive Testing, Admission (School), Computer Assisted Testing
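
For orientation, the Sympson-Hetter procedure referred to above assigns each item an exposure-control parameter and tunes those parameters by repeated CAT simulations. The sketch below is a simplified illustration of that iterative adjustment, not code from the article; the function names, the target rate `r_max`, and the update rule shown are assumptions.

```python
import numpy as np

def sympson_hetter_update(selection_rate, r_max):
    """One adjustment of the exposure-control parameters (assumed classic rule).

    selection_rate : P(S_i), proportion of simulated examinees for whom the
                     unconstrained selection rule picked item i
    r_max          : target ceiling on each item's administration rate P(A_i)
    """
    # Items selected more often than the target are throttled so that
    # P(A_i) = k_i * P(S_i) is approximately r_max; all other items are
    # administered whenever they are selected (k_i = 1).
    k = np.where(selection_rate > r_max,
                 r_max / np.maximum(selection_rate, 1e-12),
                 1.0)
    return np.clip(k, 0.0, 1.0)

def calibrate_exposure_control(simulate_selection_rates, n_items, r_max, n_rounds=20):
    """Repeatedly simulate the CAT and re-adjust the control parameters.

    `simulate_selection_rates(k)` is a caller-supplied simulation that runs the
    adaptive test for many simulees under control parameters k and returns each
    item's selection rate; this repeated simulation is the time-consuming step
    the article refers to.
    """
    k = np.ones(n_items)
    for _ in range(n_rounds):
        rates = simulate_selection_rates(k)
        k = sympson_hetter_update(rates, r_max)
    return k
```
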
Kaburlasos, Vassilis G.; Marinagi, Catherine C.; Tsoukalas, Vassilis Th. – Computers & Education, 2008
This work presents innovative cybernetics (feedback) techniques based on Bayesian statistics for drawing questions from an Item Bank towards personalized multi-student improvement. A novel software tool, namely "Module for Adaptive Assessment of Students" (or, "MAAS" for short), implements the proposed (feedback) techniques. In conclusion, a pilot…
Descriptors: Feedback (Response), Student Improvement, Computer Science, Bayesian Statistics
Doherty, R. William; Hilberg, R. Soleste – Journal of Educational Research, 2008
The authors reported findings from 3 studies examining the efficacy of Five Standards pedagogy in raising student achievement. Studies 1 and 2 were randomized designs; Study 3 was a quasi-experimental design. Samples included 53 teachers and 622 predominantly low-income Latino students in Grades 1-4. Studies assessed model fidelity with the…
Descriptors: Quasiexperimental Design, Adaptive Testing, Academic Achievement, Second Language Learning
Daro, Phil; Stancavage, Frances; Ortega, Moreica; DeStefano, Lizanne; Linn, Robert – American Institutes for Research, 2007
In Spring 2006, the NAEP Validity Studies (NVS) Panel was asked by the National Center for Education Statistics (NCES) to undertake a validity study to examine the quality of the NAEP Mathematics Assessments at grades 4 and 8. Specifically, NCES asked the NVS Panel to address five questions: (1) Does the NAEP framework offer reasonable content…
Descriptors: National Competency Tests, Mathematics Achievement, Adaptive Testing, Quality Control
Hol, A. Michiel; Vorst, Harrie C. M.; Mellenbergh, Gideon J. – Applied Psychological Measurement, 2007
In a randomized experiment (n = 515), a computerized test and a computerized adaptive test (CAT) are compared. The item pool consists of 24 polytomous motivation items. Although items are carefully selected, calibration data show that Samejima's graded response model did not fit the data optimally. A simulation study is done to assess possible…
Descriptors: Student Motivation, Simulation, Adaptive Testing, Computer Assisted Testing
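
For context on the model named in this abstract, the sketch below computes the category probabilities for one polytomous item under Samejima's graded response model. It is an illustrative implementation only; the parameter names and example values are assumptions, not data from the study.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities under Samejima's graded response model.

    theta : latent trait value
    a     : item discrimination
    b     : ordered category-boundary parameters (length m - 1 for m categories)
    """
    # Cumulative boundary curves P*(X >= k | theta), k = 1, ..., m - 1
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b, dtype=float))))
    # Pad with P*(X >= 0) = 1 and P*(X >= m) = 0, then take adjacent differences
    bounds = np.concatenate(([1.0], p_star, [0.0]))
    return bounds[:-1] - bounds[1:]

# Example: a hypothetical 5-category motivation item
probs = grm_category_probs(theta=0.3, a=1.4, b=[-1.5, -0.4, 0.6, 1.8])
assert abs(probs.sum() - 1.0) < 1e-9  # probabilities over the 5 categories sum to 1
```
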
Glas, Cees A. W.; Vos, Hans J. – 1998
A version of sequential mastery testing is studied in which response behavior is modeled by an item response theory (IRT) model. First, a general theoretical framework is sketched that is based on a combination of Bayesian sequential decision theory and item response theory. A discussion follows on how IRT-based sequential mastery testing can be…
Descriptors: Adaptive Testing, Bayesian Statistics, Item Response Theory, Mastery Tests
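
As one hedged illustration of how an IRT-based sequential mastery test can operate (the Rasch model, standard-normal prior, and the 0.95/0.05 thresholds below are assumptions for the sketch, not details from Glas and Vos): after each response, compute the posterior probability that ability exceeds the mastery cutoff and stop as soon as that probability is decisive.

```python
import numpy as np

def posterior_mastery_prob(responses, b_items, theta_cut,
                           grid=np.linspace(-4.0, 4.0, 161)):
    """P(theta > theta_cut | responses) under an assumed Rasch model
    with a standard-normal prior, evaluated on a grid."""
    post = np.exp(-0.5 * grid**2)                 # (unnormalized) prior
    for x, b in zip(responses, b_items):
        p = 1.0 / (1.0 + np.exp(-(grid - b)))     # P(correct | theta, b)
        post *= p if x == 1 else (1.0 - p)
    post /= post.sum()
    return float(post[grid > theta_cut].sum())

def mastery_decision(responses, b_items, theta_cut, upper=0.95, lower=0.05):
    """Classify as soon as the posterior mastery probability is decisive,
    otherwise keep testing."""
    pm = posterior_mastery_prob(responses, b_items, theta_cut)
    if pm >= upper:
        return "master"
    if pm <= lower:
        return "nonmaster"
    return "continue"
```
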
Ackerman, Terry A.; Davey, Tim C. – 1991
An adaptive test can usually match or exceed the measurement precision of conventional tests several times its length. This increased efficiency is not without costs, however, as the models underlying adaptive testing make strong assumptions about examinees and items. Most troublesome is the assumption that item pools are unidimensional. Truly…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Equations (Mathematics)
de Gruijter, Dato N. M. – 1988
Many applications of educational testing have a missing data aspect (MDA). This MDA is perhaps most pronounced in item banking, where each examinee responds to a different subtest of items from a large item pool and where both person and item parameter estimates are needed. The Rasch model is emphasized, and its non-parametric counterpart (the…
Descriptors: Adaptive Testing, Educational Testing, Estimation (Mathematics), Foreign Countries
ERIC Clearinghouse on Tests, Measurement, and Evaluation, Princeton, NJ. – 1983
This brief overview notes that an adaptive test differs from standardized achievement tests in that it does not consist of a fixed set of items administered to every examinee. Instead, the test is individualized for each examinee. The items administered to the examinee are selected from a large pool of items on the basis of the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Latent Trait Theory
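
The digest summarized above describes the core adaptive-testing loop: re-estimate ability after each response, then pick the next item from the pool that is most informative at the current estimate. Below is a minimal sketch of that loop under an assumed Rasch model with maximum-information selection; none of the names or modeling choices come from the ERIC digest itself.

```python
import numpy as np

GRID = np.linspace(-4.0, 4.0, 161)

def eap_theta(responses, b_used):
    """Expected a posteriori ability estimate (standard-normal prior, Rasch model)."""
    post = np.exp(-0.5 * GRID**2)
    for x, b in zip(responses, b_used):
        p = 1.0 / (1.0 + np.exp(-(GRID - b)))
        post *= p if x == 1 else (1.0 - p)
    post /= post.sum()
    return float((GRID * post).sum())

def run_cat(item_bank_b, answer_fn, test_length=20):
    """Administer `test_length` items adaptively.

    item_bank_b : difficulty parameters of the item pool
    answer_fn   : callable taking an item difficulty and returning the 0/1 response
    """
    responses, used, theta = [], [], 0.0
    available = list(range(len(item_bank_b)))
    for _ in range(test_length):
        # Under the Rasch model, information is highest for the item whose
        # difficulty is closest to the current ability estimate.
        nxt = min(available, key=lambda i: abs(item_bank_b[i] - theta))
        available.remove(nxt)
        responses.append(answer_fn(item_bank_b[nxt]))
        used.append(item_bank_b[nxt])
        theta = eap_theta(responses, used)
    return theta
```
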
Patience, Wayne M.; Reckase, Mark D. – 1978
The feasibility of implementing self-paced computerized tailored testing evaluation methods in an undergraduate measurement and evaluation course, and possible differences in achievement levels under a paced versus self-paced testing schedule were investigated. A maximum likelihood tailored testing procedure based on the simple logistic model had…
Descriptors: Academic Achievement, Achievement Tests, Adaptive Testing, Computer Assisted Testing
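
The maximum likelihood tailored testing procedure mentioned in this entry rests on ML estimation of ability under the simple logistic (Rasch) model. The Newton-Raphson sketch below is a generic, assumed implementation of that estimator, not the authors' code.

```python
import numpy as np

def ml_theta_rasch(responses, b, theta0=0.0, n_steps=25):
    """Maximum-likelihood ability estimate under the simple logistic (Rasch) model.

    responses : 0/1 item scores on the administered items
    b         : difficulty parameters of those items
    Note: the ML estimate does not exist for all-correct or all-incorrect
    response patterns, so real implementations bound the estimate or fall
    back to another estimator in that case.
    """
    x = np.asarray(responses, dtype=float)
    b = np.asarray(b, dtype=float)
    theta = theta0
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        score = np.sum(x - p)           # first derivative of the log-likelihood
        info = np.sum(p * (1.0 - p))    # Fisher information under the Rasch model
        theta += score / info           # Newton-Raphson step
    return float(theta)
```
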
McBride, James R. – Educational Leadership, 1985 (peer reviewed)
Describes a system in which questions tailored to the examinee's capabilities are administered by computer. Enumerates possible benefits of the system, reviews the "state of the art," and predicts potential applications of computerized adaptive testing. (MCG)
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Program Design
Hambleton, Ronald K.; Sireci, Stephen G.; Swaminathan, H.; Xing, Dehui; Rizavi, Saba – 2003
The purposes of this research study were to develop and field test anchor-based judgmental methods for enabling test specialists to estimate item difficulty statistics. The study consisted of three related field tests. In each, researchers worked with six Law School Admission Test (LSAT) test specialists and one or more of the LSAT subtests. The…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Difficulty Level
Rabinowitz, Stanley; Brandt, Tamara – 2001
Computer-based assessment appears to offer the promise of radically improving both how assessments are implemented and the quality of the information they can deliver. However, as many states consider whether to embrace this new technology, serious concerns remain about the fairness of the new systems and the readiness of states (and districts and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Technology, Educational Testing
van der Linden, Wim J. – 1997
In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimator. In this paper, an empirical initialization of…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
van der Linden, Wim J. – 1997
The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple expression in closed form. In addition, it is…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
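
As a hedged reconstruction of the kind of selection criterion this abstract describes (the notation and function below are assumptions, not quoted from the paper): with the Fisher information matrix accumulated over the items already administered, the next item is the one that most reduces the asymptotic variance of the ML estimator of the linear ability combination of interest.

```python
import numpy as np

def select_item_min_variance(lam, info_so_far, candidate_infos):
    """Pick the candidate item that minimizes the asymptotic variance of the
    ML estimator of the linear combination lam' theta.

    lam             : weight vector defining the ability combination of interest
    info_so_far     : Fisher information matrix of the items already administered,
                      evaluated at the current ability estimate
    candidate_infos : dict mapping item id -> that item's information matrix
                      at the current ability estimate
    """
    def variance_if_added(info_j):
        total = info_so_far + info_j
        # Asymptotic variance of lam' theta_hat is lam' (total information)^{-1} lam.
        return float(lam @ np.linalg.solve(total, lam))
    return min(candidate_infos, key=lambda j: variance_if_added(candidate_infos[j]))
```
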
