Publication Date
In 2025 | 10
Since 2024 | 47
Since 2021 (last 5 years) | 127
Since 2016 (last 10 years) | 261
Since 2006 (last 20 years) | 582
Audience
Researchers | 38
Practitioners | 25
Teachers | 8
Administrators | 6
Counselors | 3
Parents | 1
Policymakers | 1
Students | 1
Location
Taiwan | 12
United Kingdom | 10
Netherlands | 9
California | 8
Turkey | 8
Australia | 7
Germany | 7
New York | 7
Canada | 6
Florida | 6
Japan | 6

McKinley, Robert L.; Reckase, Mark D. – AEDS Journal, 1980
Describes tailored testing (in which a computer selects appropriate items from an item bank while an examinee is taking a test) and shows it to be superior to paper-and-pencil tests in such areas as reliability, security, and appropriateness of items. (IRT)
Descriptors: Adaptive Testing, Computer Assisted Testing, Higher Education, Program Evaluation
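
A rough sketch of the tailored-testing loop described above: pick the most informative unused item at the current ability estimate, score the simulated response, and update the estimate. The two-parameter logistic (2PL) item bank, grid-based Bayesian estimator, and 20-item length are illustrative assumptions, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, 50)          # hypothetical 2PL discriminations
b = rng.normal(0.0, 1.0, 50)           # hypothetical 2PL difficulties

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

grid = np.linspace(-4, 4, 161)
posterior = np.exp(-0.5 * grid ** 2)   # N(0, 1) prior on ability
true_theta, administered = 0.7, []

for _ in range(20):                    # 20-item tailored test
    theta_hat = np.sum(grid * posterior) / np.sum(posterior)
    p = p_correct(theta_hat, a, b)
    info = a ** 2 * p * (1.0 - p)      # 2PL Fisher information at theta_hat
    info[administered] = -np.inf       # never reuse an administered item
    j = int(np.argmax(info))           # select the most informative item
    administered.append(j)
    u = rng.random() < p_correct(true_theta, a[j], b[j])   # simulated answer
    like = p_correct(grid, a[j], b[j])
    posterior *= like if u else (1.0 - like)               # Bayesian update

print("final ability estimate:", np.sum(grid * posterior) / np.sum(posterior))
```

Because every item is chosen at the examinee's own provisional ability, the test stays well targeted, which is the source of the reliability and item-appropriateness advantages the abstract reports.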

Glas, Cees A. W.; van der Linden, Wim J. – Applied Psychological Measurement, 2003
Developed a multilevel item response theory (IRT) model that allows for differences between the distributions of item parameters across families of item clones. Results from simulation studies based on an item pool from the Law School Admission Test illustrate the accuracy of the item pool calibration and of the adaptive testing procedures based on the model. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Item Response Theory
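
The two-level structure the abstract describes can be sketched as follows; the notation (families p, clones i, a 2PL response function, normal family distributions) is assumed for illustration and does not reproduce the paper's exact parameterization.

```latex
% Illustrative notation only: clones i within item family p draw their
% parameters from a family-specific distribution.
\begin{align*}
  \Pr(U_{pi}=1 \mid \theta) &= \frac{\exp\{a_{pi}(\theta-b_{pi})\}}
                                    {1+\exp\{a_{pi}(\theta-b_{pi})\}},\\
  (a_{pi},\, b_{pi}) &\sim N(\mu_p, \Sigma_p), \qquad \theta \sim N(0,1).
\end{align*}
```

Calibration then targets the family hyperparameters (mu_p, Sigma_p) rather than separate parameters for every individual clone, and item selection can operate at the family level.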

Schnipke, Deborah L.; Green, Bert F. – Journal of Educational Measurement, 1995
Two item selection algorithms, one based on maximal differentiation between examinees and one based on item response theory and maximum information for each examinee, were compared in simulated linear and adaptive tests of cognitive ability. Adaptive tests based on maximum information were clearly superior. (SLD)
Descriptors: Adaptive Testing, Algorithms, Comparative Analysis, Item Response Theory
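
The two selection rules can be contrasted in a few lines; reading "maximal differentiation" as choosing the unused item whose expected proportion-correct is nearest .5 for a typical examinee is an assumption made for illustration, as is the 2PL bank.

```python
import numpy as np

def p2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def select_differentiation(a, b, unused):
    # fixed rule: item closest to 50% correct for an average examinee
    p = p2pl(0.0, a[unused], b[unused])
    return unused[int(np.argmin(np.abs(p - 0.5)))]

def select_max_info(theta_hat, a, b, unused):
    # adaptive rule: largest 2PL Fisher information at the current estimate
    p = p2pl(theta_hat, a[unused], b[unused])
    return unused[int(np.argmax(a[unused] ** 2 * p * (1.0 - p)))]

# usage with a hypothetical bank: j = select_max_info(0.3, a, b, np.arange(len(a)))
```

The second rule re-targets item difficulty to each examinee as the estimate moves, which is consistent with the superiority of maximum-information adaptive tests reported above.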

Laatsch, Linda; Choca, James – Psychological Assessment, 1994
The authors propose using cluster analysis to develop a branching logic that would allow the adaptive administration of psychological instruments. The proposed methodology is described in detail and used to develop an adaptive version of the Halstead Category Test from archival data. (SLD)
Descriptors: Adaptive Testing, Cluster Analysis, Computer Assisted Testing, Psychological Testing
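
Loosely, the proposal can be pictured as clustering archival response profiles and routing a new examinee to the branch whose centroid best matches a few early responses. Everything below (k-means, fake binary data, nearest-centroid routing) is an illustrative stand-in, not the paper's actual construction for the Halstead Category Test.

```python
import numpy as np
from sklearn.cluster import KMeans

archival = np.random.default_rng(1).integers(0, 2, size=(200, 40))  # fake 0/1 data
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(archival)

def route(early_responses, screen_items):
    # match the examinee's first answers to the nearest cluster centroid
    d = np.abs(km.cluster_centers_[:, screen_items] - early_responses).sum(axis=1)
    return int(np.argmin(d))            # branch index -> tailored item subset

branch = route(np.array([1, 0, 1, 1, 0]), np.arange(5))
```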

van der Linden, Wim J. – Psychometrika, 1998
This paper suggests several item selection criteria for adaptive testing that are all based on the use of the true posterior. Some of the ability estimators produced by these criteria are discussed and criticized on empirical grounds. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
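
One fully Bayesian criterion of the kind the abstract mentions is minimum expected posterior variance, sketched here on a grid; the 2PL items and the notation are assumptions for illustration.

```python
import numpy as np

grid = np.linspace(-4, 4, 161)

def posterior_variance(post):
    w = post / post.sum()
    m = (w * grid).sum()
    return (w * (grid - m) ** 2).sum()

def expected_posterior_variance(post, a_j, b_j):
    # average the post-response variance over the item's predictive outcome
    p = 1.0 / (1.0 + np.exp(-a_j * (grid - b_j)))   # P(correct | theta)
    w = post / post.sum()
    p_right = (w * p).sum()                          # predictive P(correct)
    return (p_right * posterior_variance(post * p)
            + (1 - p_right) * posterior_variance(post * (1 - p)))

# select the unused item j that minimizes expected_posterior_variance(post, a[j], b[j])
```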

May, Kim; Nicewander, W. Alan – Educational and Psychological Measurement, 1998
The degree to which scale distortion in the ordinary difference score can be removed by using differences based on estimated examinee proficiency (theta) in either conventional or adaptive testing situations was studied using item response theory. Using estimated thetas removed much of the scale distortion for both conventional and adaptive tests. (SLD)
Descriptors: Ability, Achievement Gains, Adaptive Testing, Estimation (Mathematics)
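
The distortion at issue comes from the nonlinearity of the test characteristic curve T(theta), the expected number-correct score; a toy Rasch example with invented items shows how equal gains in theta map to unequal raw-score gains:

```python
import numpy as np

b = np.linspace(-2, 2, 30)      # 30 hypothetical Rasch item difficulties

def T(theta):                   # test characteristic curve: expected raw score
    return (1.0 / (1.0 + np.exp(-(theta - b)))).sum()

print(T(0.5) - T(0.0))   # raw-score gain for a 0.5-theta gain mid-scale
print(T(3.0) - T(2.5))   # same theta gain near the ceiling: far smaller
```

Differencing the estimated thetas themselves avoids this compression, which is the effect the study reports.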

Neuman, George; Baydoun, Ramzi – Applied Psychological Measurement, 1998
Studied the cross-mode equivalence of paper-and-pencil and computer-based clerical tests with 141 undergraduates. Found no differences across modes for the two types of tests. Differences can be minimized when speeded computerized tests follow the same administration and response procedures as the paper format. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Higher Education

Vispoel, Walter P. – Journal of Educational Measurement, 1998
Compared results from computer-adaptive and self-adaptive tests under conditions in which item review was and was not permitted for 379 college students. Results suggest that, when given the opportunity, most examinees will change answers, but usually only to a small portion of items, resulting in some benefit to the test taker. (SLD)
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Higher Education

Chen, Ssu-Kuang; Hou, Liling; Dodd, Barbara G. – Educational and Psychological Measurement, 1998
A simulation study was conducted to investigate the application of expected a posteriori (EAP) trait estimation in computerized adaptive tests (CAT) based on the partial credit model and compare it with maximum likelihood estimation (MLE). Results show the conditions under which EAP and MLE provide relatively accurate estimation in CAT. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
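
A hedged sketch of EAP estimation under the partial credit model, the combination studied above; the quadrature grid, prior, and step parameters are invented for the example.

```python
import numpy as np

grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * grid ** 2)                 # standard normal prior

def pcm_probs(theta, deltas):
    # partial credit model category probabilities for one item
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    e = np.exp(steps - steps.max())
    return e / e.sum()

def eap(responses, item_deltas):
    post = prior.copy()
    for x, deltas in zip(responses, item_deltas):
        post *= np.array([pcm_probs(t, deltas)[x] for t in grid])
    return (grid * post).sum() / post.sum()      # posterior mean = EAP

theta_hat = eap([2, 1, 0], [[-0.5, 0.4], [0.0, 1.0], [-1.0, 0.2]])
```

Unlike MLE, the EAP estimate exists even for all-correct or all-incorrect response patterns, which is one reason the two estimators behave differently early in a CAT.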

Walker, Cindy M.; Beretvas, S. Natasha; Ackerman, Terry – Applied Measurement in Education, 2001
Conducted a simulation study of differential item functioning (DIF) to compare power and Type I error rates under two conditions: using an examinee's ability estimate as the conditioning variable with the CATSIB program, either with or without CATSIB's regression correction. Discusses implications of the findings for DIF detection. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Bias

Nandakumar, Ratna; Roussos, Louis – Journal of Educational and Behavioral Statistics, 2004
A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type I error inflation by employing a CAT version of the SIBTEST "regression correction." The…
Descriptors: Evaluation, Adaptive Testing, Computer Assisted Testing, Pretesting
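
The SIBTEST-style statistic that CATSIB adapts can be sketched as a matched comparison: stratify examinees on estimated ability and average the reference-focal difference on the studied item. The regression correction itself is omitted, and the details below are illustrative assumptions rather than the published procedure.

```python
import numpy as np

def sib_beta(theta_hat, group, y, n_strata=10):
    # theta_hat: ability estimates; group: 0 = reference, 1 = focal
    # y: 0/1 scores on the studied item
    edges = np.quantile(theta_hat, np.linspace(0, 1, n_strata + 1))
    strata = np.digitize(theta_hat, edges[1:-1])
    beta, weight = 0.0, 0
    for k in range(n_strata):
        r = (strata == k) & (group == 0)
        f = (strata == k) & (group == 1)
        if r.any() and f.any():
            n_k = r.sum() + f.sum()
            beta += n_k * (y[r].mean() - y[f].mean())
            weight += n_k
    return beta / weight        # positive values favor the reference group

rng = np.random.default_rng(4)
theta = rng.normal(size=500)
grp = rng.integers(0, 2, 500)
item = (rng.random(500) < 1 / (1 + np.exp(-theta))).astype(int)
print(sib_beta(theta, grp, item))   # near 0: no DIF built into these data
```
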
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2003
The Sympson and Hetter (1985, 1997) method is a method of probabilistic item-exposure control in computerized adaptive testing. Setting its control parameters to admissible values requires an iterative process of computer simulations that has been found to be time consuming, particularly if the parameters have to be set conditional on a realistic…
Descriptors: Law Schools, Adaptive Testing, Admission (School), Computer Assisted Testing
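
The calibration loop the abstract calls time consuming can be sketched as follows: simulate examinees, record how often each item is selected, and shrink the control parameter of any item whose selection probability exceeds the exposure target. The one-item-per-examinee "test" and the random preference order are simplifications made up for the sketch; only the update K_j = r / P(selected_j) follows the method's standard description.

```python
import numpy as np

rng = np.random.default_rng(3)
n_items, r_max, n_sim = 100, 0.25, 2000
K = np.ones(n_items)                     # exposure-control parameters

def preference_order():
    # stand-in for the CAT's item-selection rule for one examinee
    return np.argsort(-(np.linspace(2, 0, n_items) + rng.gumbel(size=n_items)))

for _ in range(10):                      # iterative calibration rounds
    selected = np.zeros(n_items)
    for _ in range(n_sim):
        for j in preference_order():
            selected[j] += 1             # item j is selected...
            if rng.random() < K[j]:      # ...and administered with prob. K[j]
                break
    p_sel = selected / n_sim
    K = np.where(p_sel > r_max, r_max / p_sel, 1.0)

print("approx. max administration rate:", (K * p_sel).max())
```
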
Chang, Wen-Chih; Yang, Hsuan-Che; Shih, Timothy K.; Chao, Louis R. – International Journal of Distance Education Technologies, 2009
E-learning provides a convenient and efficient way to learn. Formative assessment not only guides students in instruction and learning and diagnoses skill or knowledge gaps, but also measures progress and supports evaluation. An efficient and convenient formative assessment system is therefore a key component of e-learning. However, most e-learning…
Descriptors: Electronic Learning, Student Evaluation, Formative Evaluation, Educational Objectives

Tseng, Shian-Shyong; Sue, Pei-Chi; Su, Jun-Ming; Weng, Jui-Feng; Tsai, Wen-Nung – Computers & Education, 2007
In recent years, e-learning systems have become more and more popular, and many adaptive learning environments have been proposed to offer learners customized courses in accordance with their aptitudes and learning results. To achieve adaptive learning, a predefined concept map of a course is often used to provide adaptive learning guidance…
Descriptors: Concept Mapping, Educational Technology, Virtual Classrooms, Junior High Schools
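
Concept-map-driven guidance of the kind described can be pictured as a prerequisite graph over course concepts: recommend whatever the learner has not yet mastered but is ready for. The map and mastery set below are invented for the example.

```python
prereqs = {                      # concept -> concepts it depends on
    "fractions": [],
    "ratios": ["fractions"],
    "percentages": ["fractions"],
    "proportional reasoning": ["ratios", "percentages"],
}

def next_concepts(mastered):
    # concepts not yet mastered whose prerequisites are all mastered
    return [c for c, deps in prereqs.items()
            if c not in mastered and all(d in mastered for d in deps)]

print(next_concepts({"fractions"}))   # -> ['ratios', 'percentages']
```
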
Glas, Cees A. W.; Vos, Hans J. – 1998
A version of sequential mastery testing is studied in which response behavior is modeled by an item response theory (IRT) model. First, a general theoretical framework is sketched that is based on a combination of Bayesian sequential decision theory and item response theory. A discussion follows on how IRT-based sequential mastery testing can be…
Descriptors: Adaptive Testing, Bayesian Statistics, Item Response Theory, Mastery Tests
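
A minimal sketch of the combination described, assuming a Rasch item pool, a grid posterior, and a 0.95 decision threshold (all invented): after each response the posterior over ability is updated, and testing stops as soon as the posterior probability of mastery, or of non-mastery, is high enough.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(-4, 4, 161)
post = np.exp(-0.5 * grid ** 2)        # N(0, 1) prior over ability
cut, stop_p = 0.0, 0.95                # mastery cutoff and stopping threshold
true_theta = 0.8

for n in range(1, 51):                 # at most 50 Rasch items
    b = rng.normal()                   # difficulty of the next item
    u = rng.random() < 1 / (1 + np.exp(-(true_theta - b)))  # simulated answer
    like = 1 / (1 + np.exp(-(grid - b)))
    post *= like if u else (1 - like)  # posterior update
    p_master = post[grid >= cut].sum() / post.sum()
    if p_master >= stop_p or p_master <= 1 - stop_p:
        break                          # enough evidence to classify

decision = ("master" if p_master >= stop_p
            else "nonmaster" if p_master <= 1 - stop_p else "undecided")
print(decision, "after", n, "items")
```

A loss structure from Bayesian sequential decision theory would replace the fixed 0.95 threshold with stopping costs, but the stopping logic has the same shape.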