| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 29 |
| Since 2022 (last 5 years) | 168 |
| Since 2017 (last 10 years) | 329 |
| Since 2007 (last 20 years) | 613 |
| Descriptor | Results |
| --- | --- |
| Computer Assisted Testing | 1057 |
| Test Items | 1057 |
| Adaptive Testing | 448 |
| Test Construction | 385 |
| Item Response Theory | 255 |
| Item Banks | 223 |
| Foreign Countries | 194 |
| Difficulty Level | 166 |
| Test Format | 160 |
| Item Analysis | 158 |
| Simulation | 142 |
| Audience | Results |
| --- | --- |
| Researchers | 24 |
| Practitioners | 20 |
| Teachers | 13 |
| Students | 2 |
| Administrators | 1 |
| Location | Results |
| --- | --- |
| Germany | 17 |
| Australia | 13 |
| Japan | 12 |
| Taiwan | 12 |
| Turkey | 12 |
| United Kingdom | 12 |
| China | 11 |
| Oregon | 10 |
| Canada | 9 |
| Netherlands | 9 |
| United States | 9 |
| Laws, Policies, & Programs | Results |
| --- | --- |
| Individuals with Disabilities… | 8 |
| Americans with Disabilities… | 1 |
| Head Start | 1 |
Sykes, Robert C.; Ito, Kyoko – 1995
Whether the presence of bidimensionality has any effect on the adaptive recalibration of test items was studied through live-data simulation of computer adaptive testing (CAT) forms. The source data were examinee responses to the 298 scored multiple choice items of a licensure examination in a health care profession. Three 75-item part-forms,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Estimation (Mathematics)
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
Items with the highest discrimination parameter values in a logistic item response theory (IRT) model do not necessarily give maximum information. This paper shows which discrimination parameter values (as a function of the guessing parameter and the distance between person ability and item difficulty) give maximum information for the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
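The tradeoff Veerkamp and Berger describe can be illustrated with the standard 3PL item information function. This is a minimal sketch; the parameter values are illustrative, not taken from the paper:

```python
import math

def p3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p3pl(theta, a, b, c)
    q = 1.0 - p
    return a * a * (q / p) * ((p - c) / (1.0 - c)) ** 2

# A highly discriminating item with guessing (a=2.0, c=0.25) vs. a
# moderately discriminating item without guessing (a=1.2, c=0.0):
high_a = dict(a=2.0, b=0.0, c=0.25)
mod_a = dict(a=1.2, b=0.0, c=0.0)

# Well below the items' difficulty, the lower-a item is more informative:
print(info_3pl(-1.0, **high_a) < info_3pl(-1.0, **mod_a))  # True
# At the difficulty level, the higher-a item wins:
print(info_3pl(0.0, **high_a) > info_3pl(0.0, **mod_a))    # True
```

The crossover depends on the guessing parameter and on the distance between ability and difficulty, which is exactly the dependence the abstract points to.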
Halkitis, Perry N.; And Others – 1996
The relationship between test item characteristics and testing time was studied for a computer-administered licensing examination. One objective of the study was to develop a model to predict testing time on the basis of known item characteristics. Response latencies (i.e., the amount of time taken by examinees to read, review, and answer items)…
Descriptors: Computer Assisted Testing, Difficulty Level, Estimation (Mathematics), Licensing Examinations (Professions)
Mills, Craig N.; Stocking, Martha L. – 1995
Computerized adaptive testing (CAT), while well-grounded in psychometric theory, has had few large-scale applications for high-stakes, secure tests in the past. This is now changing as the cost of computing has declined rapidly. As is always true where theory is translated into practice, many practical issues arise. This paper discusses a number…
Descriptors: Adaptive Testing, Computer Assisted Testing, High Stakes Tests, Item Banks
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
In this study some alternative item selection criteria for adaptive testing are proposed. These criteria take into account the uncertainty of the ability estimates. A general weighted information criterion is suggested of which the usual maximum information criterion and the suggested alternative criteria are special cases. A simulation study was…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
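The baseline that the paper's weighted criteria generalize is the usual maximum-information criterion. A sketch of that baseline with 2PL items follows; the pool and the name `pick_max_info` are illustrative, not the authors' code:

```python
import math

def item_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def pick_max_info(theta_hat, pool, used):
    """Maximum-information criterion: administer the unused item with
    the largest information at the current ability estimate."""
    candidates = [i for i in range(len(pool)) if i not in used]
    return max(candidates, key=lambda i: item_info(theta_hat, *pool[i]))

pool = [(1.0, -1.0), (1.5, 0.0), (0.8, 0.5), (2.0, 1.5)]  # (a, b) pairs
print(pick_max_info(0.0, pool, used=set()))  # 1: b matches theta, high a
```

The weighted criteria in the paper replace the point evaluation at `theta_hat` with information averaged over an interval or weighted by the likelihood, to account for uncertainty in the ability estimate.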
Roos, Linda L.; Wise, Steven L.; Finney, Sara J. – 1998
Previous studies have shown that, when administered a self-adapted test, a few examinees will choose item difficulty levels that are not well-matched to their proficiencies, resulting in high standard errors of proficiency estimation. This study investigated whether the previously observed effects of a self-adapted test--lower anxiety and higher…
Descriptors: Adaptive Testing, College Students, Comparative Analysis, Computer Assisted Testing
Patton, Jan; Steffee, John – 1990
This document provides printed instructions for teachers to use with an IBM-compatible microcomputer to construct tests and then have the computer give the tests, grade them, and print the test results. Computerized tests constructed in this way may contain true-false questions, multiple-choice questions, or a combination of both. The questions…
Descriptors: Business Education, Computer Assisted Testing, Computer Software, Computer Uses in Education
Frick, Theodore W. – 1991
Expert systems can be used to aid decision making. A computerized adaptive test is one kind of expert system, although not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. Two versions of EXSPRT were developed, one with random…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Expert Systems
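The stopping-rule component that EXSPRT borrows can be sketched in isolation. This is a plain Wald sequential probability ratio test for a mastery decision, not the full expert-systems approach, and all probabilities and error rates below are illustrative:

```python
import math

def sprt_mastery(responses, p_master=0.8, p_nonmaster=0.5,
                 alpha=0.05, beta=0.05):
    """Wald's SPRT for a master/nonmaster decision.
    Returns (decision, number of items used)."""
    upper = math.log((1.0 - beta) / alpha)   # cross -> decide 'master'
    lower = math.log(beta / (1.0 - alpha))   # cross -> decide 'nonmaster'
    llr = 0.0                                # running log-likelihood ratio
    for n, correct in enumerate(responses, start=1):
        if correct:
            llr += math.log(p_master / p_nonmaster)
        else:
            llr += math.log((1.0 - p_master) / (1.0 - p_nonmaster))
        if llr >= upper:
            return "master", n
        if llr <= lower:
            return "nonmaster", n
    return "undecided", len(responses)

print(sprt_mastery([1, 1, 1, 1, 1, 1, 1]))  # ('master', 7)
```

Testing stops as soon as the evidence crosses either threshold, which is why SPRT-based tests can be much shorter than fixed-length ones for examinees far from the cut score.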
Assessing the Effects of Computer Administration on Scores and Parameter Estimates Using IRT Models.
Sykes, Robert C.; And Others – 1991
To investigate the psychometric feasibility of replacing a paper-and-pencil licensing examination with a computer-administered test, a validity study was conducted. The computer-administered test (Cadm) was a common set of items for all test takers, distinct from computerized adaptive testing, in which test takers receive items appropriate to…
Descriptors: Adults, Certification, Comparative Testing, Computer Assisted Testing
Rocklin, Thomas – 1989
In self-adapted testing, examinees are allowed to choose the difficulty of each item to be presented immediately before attempting it. Previous research has demonstrated that self-adapted testing leads to better performance than do fixed-order tests and is preferred by examinees. The present study examined the strategies that 29 college students…
Descriptors: Adaptive Testing, Attribution Theory, College Students, Computer Assisted Testing
Wise, Lauress L.; And Others – 1989
The effects of item position on item statistics were studied in a large set of data from tests of word knowledge (WK) and arithmetic reasoning (AR). Position effects on item response theory (IRT) parameter estimates and classical item statistics were also investigated. Data were collected as part of a project to refine the Army's Computerized…
Descriptors: Armed Forces, Computer Assisted Testing, Item Analysis, Latent Trait Theory
Rikers, Jos H. A. N. – 1988
The process of writing test items is analyzed, and a blueprint is presented for an authoring system for test item writing to reduce invalidity and to structure the process of item writing. The developmental methodology is introduced, and the first steps in the process are reported. A historical review traces the advances made in the field and the…
Descriptors: Authoring Aids (Programing), Computer Assisted Testing, Foreign Countries, Item Banks
van der Linden, Wim J., Ed. – 1987
Four discussions of test construction based on item response theory (IRT) are presented. The first discussion, "Test Design as Model Building in Mathematical Programming" (T. J. J. M. Theunissen), presents test design as a decision process under certainty. A natural way of modeling this process leads to mathematical programming. General…
Descriptors: Algorithms, Computer Assisted Testing, Decision Making, Foreign Countries
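Theunissen's framing of test design as mathematical programming amounts to a 0/1 selection problem: choose items to maximize an objective (e.g., summed information at a target ability) subject to constraints such as test length. A toy version, solved by brute force rather than an integer-programming solver, and with an invented item pool, might look like:

```python
import math
from itertools import combinations

def assemble(pool, theta0, length):
    """Tiny 0/1 test-assembly problem: maximize summed 2PL information
    at theta0 subject to a fixed test length. Brute force stands in for
    the integer-programming solvers used in practice."""
    def info(a, b):
        p = 1.0 / (1.0 + math.exp(-a * (theta0 - b)))
        return a * a * p * (1.0 - p)
    best = max(combinations(range(len(pool)), length),
               key=lambda idx: sum(info(*pool[i]) for i in idx))
    return sorted(best)

pool = [(0.7, -2.0), (1.6, 0.1), (1.4, 0.0), (0.9, 2.5), (1.8, -0.2)]
print(assemble(pool, theta0=0.0, length=3))  # [1, 2, 4]
```

Real assembly problems add content, exposure, and overlap constraints, which is what makes the mathematical-programming formulation attractive.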
Choppin, Bruce – 1982
The answer-until-correct procedure has made comparatively little impact on the field of educational testing due to the absence of a sound theoretical base for turning the response data into measures. Three new latent trait models are described. They differ in their complexity, though each is designed to yield a single parameter to measure student…
Descriptors: Academic Achievement, Computer Assisted Testing, Computer Programs, Educational Testing
Roid, Gale H.; And Others – 1980
An earlier study was extended and replicated to examine the feasibility of generating multiple-choice test questions by transforming sentences from prose instructional material. In the first study, a computer-based algorithm was used to analyze prose subject matter and to identify high-information words. Sentences containing selected words were…
Descriptors: Algorithms, Computer Assisted Testing, Criterion Referenced Tests, Difficulty Level
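The high-information-word idea can be approximated with a crude corpus-frequency heuristic: rare content words carry more information, so blank one out to form a cloze item. This sketch is an illustration of the general approach, not Roid's actual algorithm; `make_cloze` and the sample text are invented:

```python
import re
from collections import Counter

def make_cloze(text, sentence):
    """Blank out the rarest content word in `sentence`, judging rarity
    by word frequency within `text`. Returns (item stem, answer key)."""
    freq = Counter(w.lower() for w in re.findall(r"[A-Za-z]+", text))
    words = [w for w in re.findall(r"[A-Za-z]+", sentence) if len(w) > 3]
    key = min(words, key=lambda w: freq[w.lower()])
    stem = sentence.replace(key, "_____", 1)
    return stem, key

text = ("Item response theory models the probability of a correct answer. "
        "The probability depends on ability and item parameters. "
        "Calibration estimates the item parameters from response data.")
sentence = "Calibration estimates the item parameters from response data."
stem, key = make_cloze(text, sentence)
print(key)  # Calibration
```

Generating plausible distractors for the blanked word is the harder part, and is where the follow-up work the abstract describes comes in.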


