Publication Date
| In 2026 | 0 |
| Since 2025 | 29 |
| Since 2022 (last 5 years) | 168 |
| Since 2017 (last 10 years) | 329 |
| Since 2007 (last 20 years) | 613 |
Descriptor
| Computer Assisted Testing | 1057 |
| Test Items | 1057 |
| Adaptive Testing | 448 |
| Test Construction | 385 |
| Item Response Theory | 255 |
| Item Banks | 223 |
| Foreign Countries | 194 |
| Difficulty Level | 166 |
| Test Format | 160 |
| Item Analysis | 158 |
| Simulation | 142 |
Audience
| Researchers | 24 |
| Practitioners | 20 |
| Teachers | 13 |
| Students | 2 |
| Administrators | 1 |
Location
| Germany | 17 |
| Australia | 13 |
| Japan | 12 |
| Taiwan | 12 |
| Turkey | 12 |
| United Kingdom | 12 |
| China | 11 |
| Oregon | 10 |
| Canada | 9 |
| Netherlands | 9 |
| United States | 9 |
Laws, Policies, & Programs
| Individuals with Disabilities… | 8 |
| Americans with Disabilities… | 1 |
| Head Start | 1 |
Plumer, Gilbert E. – 2000
In the context of examining the feasibility and advisability of computerizing the Law School Admission Test (LSAT), a review of current literature was conducted with the following goals: (1) determining the skills that are most important in good legal reasoning according to the literature; (2) determining the extent to which existing LSAT item…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Law Schools
Parshall, Cynthia G.; Kromrey, Jeffrey D.; Harmes, J. Christine; Sentovich, Christina – 2001
Computerized adaptive tests (CATs) are efficient because of their optimal item selection procedures, which target maximally informative items at each estimated ability level. However, operational administration of these optimal CATs results in a relatively small subset of items being administered to examinees too often, while another portion of the item pool is…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
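The exposure skew described in the abstract can be illustrated with a small simulation. This is a hedged sketch only: it assumes a hypothetical Rasch item pool, a crude stochastic-approximation ability update in place of real MLE/EAP scoring, and purely maximum-information selection; none of these details come from the paper itself.

```python
import math
import random

random.seed(1)

# Hypothetical Rasch item pool: 100 difficulties spread over [-3, 3].
pool = [-3 + 6 * i / 99 for i in range(100)]

def fisher_info(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

N_EXAMINEES, TEST_LEN = 500, 20
exposure = [0] * len(pool)

for _ in range(N_EXAMINEES):
    theta_true, theta_hat, used = random.gauss(0, 1), 0.0, set()
    for _ in range(TEST_LEN):
        # Optimal (maximally informative) selection, with no exposure control.
        item = max((i for i in range(len(pool)) if i not in used),
                   key=lambda i: fisher_info(theta_hat, pool[i]))
        used.add(item)
        exposure[item] += 1
        p_true = 1.0 / (1.0 + math.exp(-(theta_true - pool[item])))
        u = 1 if random.random() < p_true else 0
        # Crude stochastic-approximation update standing in for real scoring.
        theta_hat += 0.3 * (u - 1.0 / (1.0 + math.exp(-(theta_hat - pool[item]))))

rates = [e / N_EXAMINEES for e in exposure]
print(f"max exposure rate: {max(rates):.2f}, unused items: {rates.count(0.0)}")
```

Because every examinee starts at the same provisional ability, the item most informative at theta = 0 is administered to everyone (exposure rate 1.0), while items at the extremes of the pool go unused; this is exactly the pattern exposure-control procedures are designed to break.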
Cole, Rebecca Pollard; MacIsaac, Dan; Cole, David M. – 2001
The purpose of this study (1,313 college student participants) was to examine the differences in paper-based and Web-based administrations of a commonly used assessment instrument, the Force Concept Inventory (FCI) (D. Hestenes, M. Wells, and G. Swackhamer, 1992). Results demonstrated no appreciable difference on FCI scores or FCI items based on…
Descriptors: College Students, Computer Assisted Testing, Higher Education, Physics
McLeod, Lori D.; Schnipke, Deborah L. – 1999
Because scores on high-stakes tests influence many decisions, tests need to be secure. Decisions based on scores affected by preknowledge of items are unacceptable. New methods are needed to detect the new cheating strategies used for computer-administered tests because item pools are typically used over time, providing the potential opportunity…
Descriptors: Adaptive Testing, Cheating, Computer Assisted Testing, High Stakes Tests
Peer reviewed
Schnipke, Deborah L.; Scrams, David J. – Journal of Educational Measurement, 1997
A method to measure speededness on tests is presented that reflects the tendency of examinees to guess rapidly on items as time expires. The method models response times with a two-state mixture model, as demonstrated with data from a computer-administered reasoning test taken by 7,218 examinees. (SLD)
Descriptors: Adults, Computer Assisted Testing, Guessing (Tests), Item Response Theory
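The two-state idea (one response-time distribution for solution behavior, another for rapid guessing) can be sketched as a two-component normal mixture on log response times, fitted by EM. This is a minimal illustration on simulated data with assumed parameters, not the authors' actual model or estimation code.

```python
import math
import random

random.seed(7)

# Simulated log response times: 90% "solution behavior" near log(30 s),
# 10% rapid guesses near log(2 s). Illustrative numbers, not from the paper.
data = ([random.gauss(math.log(30), 0.4) for _ in range(900)]
        + [random.gauss(math.log(2), 0.3) for _ in range(100)])

def npdf(x, mu, sd):
    """Normal density."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# EM for a two-component normal mixture (component 0 = rapid guessing).
pi, mu, sd = 0.5, [1.0, 3.0], [1.0, 1.0]
for _ in range(50):
    # E-step: posterior probability that each response was a rapid guess.
    g = [pi * npdf(x, mu[0], sd[0])
         / (pi * npdf(x, mu[0], sd[0]) + (1 - pi) * npdf(x, mu[1], sd[1]))
         for x in data]
    # M-step: re-estimate the mixing weight, means, and spreads.
    n0 = sum(g)
    n1 = len(data) - n0
    pi = n0 / len(data)
    mu[0] = sum(gi * x for gi, x in zip(g, data)) / n0
    mu[1] = sum((1 - gi) * x for gi, x in zip(g, data)) / n1
    sd[0] = math.sqrt(sum(gi * (x - mu[0]) ** 2 for gi, x in zip(g, data)) / n0)
    sd[1] = math.sqrt(sum((1 - gi) * (x - mu[1]) ** 2 for gi, x in zip(g, data)) / n1)

print(f"estimated rapid-guessing proportion: {pi:.2f}")
```

The fitted mixing weight recovers the generating guessing proportion (0.10 here), which is the quantity a speededness analysis of this kind reports per item.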
Peer reviewed
Searls, Donald T.; And Others – Journal of Experimental Education, 1990
Indices that detail aspects of student test responses include overall aberrancy; tendencies to miss relatively easy items; tendencies to correctly answer more difficult items; and a combination that indicates how the latter tendencies balance each other. Mathematics test results for 368 college students illustrate the indices. (SLD)
Descriptors: College Students, Computer Assisted Testing, Higher Education, Response Style (Tests)
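The two tendencies these indices separate can be sketched directly: for a single examinee, count misses on relatively easy items and correct answers on relatively hard items. A toy illustration with hypothetical scores, classical difficulties (sample proportions correct), and assumed cut-offs; these are not the authors' actual indices.

```python
# Hypothetical 1/0 scores for one examinee and the proportion of the
# sample answering each item correctly (classical difficulty).
responses = {"q1": 0, "q2": 1, "q3": 1, "q4": 0, "q5": 1}
p_correct = {"q1": 0.90, "q2": 0.75, "q3": 0.60, "q4": 0.40, "q5": 0.20}

EASY, HARD = 0.70, 0.30        # assumed cut-offs, for illustration only
missed_easy = sum(1 for q, u in responses.items()
                  if u == 0 and p_correct[q] > EASY)
hit_hard = sum(1 for q, u in responses.items()
               if u == 1 and p_correct[q] < HARD)

# q1 (90% of the sample passes it) was missed; q5 (20%) was answered correctly.
print(missed_easy, hit_hard)   # -> 1 1
```

An aberrancy index of the kind described would combine these two counts into a single score indicating how strongly the response pattern departs from what the difficulties predict.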
Peer reviewed
Stocking, Martha L.; And Others – Applied Psychological Measurement, 1993
A method of automatically selecting items for inclusion in a test with constraints on item content and statistical properties was applied to real data. Tests constructed manually from the same data and constraints were compared to tests constructed automatically. Results show areas in which automated assembly can improve test construction. (SLD)
Descriptors: Algorithms, Automation, Comparative Testing, Computer Assisted Testing
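One way to picture automated assembly under content and statistical constraints is a greedy selection: fill each content quota with the items most informative at a target ability. A deliberately simplified sketch assuming a hypothetical Rasch pool and per-area quotas; the authors' automated method is not necessarily this rule.

```python
import math
import random

random.seed(3)

# Hypothetical pool: 60 Rasch items, each tagged with one of three content areas.
pool = [{"area": "ABC"[i % 3], "b": random.gauss(0, 1)} for i in range(60)]
for item in pool:
    p = 1 / (1 + math.exp(-(0.0 - item["b"])))   # probability correct at theta = 0
    item["info"] = p * (1 - p)                   # Rasch information at theta = 0

QUOTA = {"A": 4, "B": 4, "C": 4}                 # content constraints on the test

# Greedy assembly: within each content area, take the most informative items.
test = []
for area, need in QUOTA.items():
    ranked = sorted((i for i in pool if i["area"] == area),
                    key=lambda i: i["info"], reverse=True)
    test.extend(ranked[:need])

print(f"assembled {len(test)} items, "
      f"total information at theta = 0: {sum(i['info'] for i in test):.2f}")
```

A greedy rule like this satisfies each constraint but can be myopic across constraints, which is why formal assembly methods optimize all constraints jointly and why automated and manual assemblies are worth comparing, as the study does.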
Davis, Laurie Laughlin – Applied Psychological Measurement, 2004
Choosing a strategy for controlling item exposure has become an integral part of test development for computerized adaptive testing (CAT). This study investigated the performance of six procedures for controlling item exposure in a series of simulated CATs under the generalized partial credit model. In addition to a no-exposure control baseline…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Achievement Tests
Huitzing, Hiddo A. – Applied Psychological Measurement, 2004
This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…
Descriptors: Mathematical Applications, Simulation, Item Sampling, Item Response Theory
Wiberg, Marie – Journal of Educational and Behavioral Statistics, 2003
A criterion-referenced computerized test is expressed as a statistical hypothesis-testing problem, which allows it to be studied using the theory of optimal design. The power function of the statistical test is used as a criterion function when designing the test. A formal proof is provided showing that all items should have the same item…
Descriptors: Test Items, Computer Assisted Testing, Statistics, Validity
Threlfall, John; Pool, Peter; Homer, Matthew; Swinnerton, Bronwen – Educational Studies in Mathematics, 2007
This article explores the effect on assessment of "translating" paper and pencil test items into their computer equivalents. Computer versions of a set of mathematics questions derived from the paper-based end of key stage 2 and 3 assessments in England were administered to age-appropriate pupil samples, and the outcomes compared…
Descriptors: Test Items, Student Evaluation, Foreign Countries, Test Validity
Brosvic, Gary M.; Epstein, Michael L.; Dihoff, Roberta E.; Cook, Michael L. – Psychological Record, 2006
The present studies were undertaken to examine the effects of manipulating delay-interval task (Study 1) and timing of feedback (Study 2) on acquisition and retention. Participants completed a 100-item cumulative final examination, which included 50 items from each laboratory examination, plus 50 entirely new items. Acquisition and retention were…
Descriptors: Individual Testing, Multiple Choice Tests, Feedback, Test Items
Gershon, Richard; Bergstrom, Betty – 1995
When examinees are allowed to review responses on an adaptive test, can they "cheat" the adaptive algorithm in order to take an easier test and improve their performance? Theoretically, deliberately answering items incorrectly will lower the examinee ability estimate and easy test items will be administered. If review is then allowed,…
Descriptors: Adaptive Testing, Algorithms, Cheating, Computer Assisted Testing
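The strategy in question can be illustrated with a toy Rasch CAT: answering everything incorrectly drives the provisional ability estimate down, so the algorithm serves progressively easier items. A hedged sketch with an assumed item pool and a simplistic ability update, not the authors' actual simulation.

```python
import math
import random

random.seed(11)

pool = sorted(random.gauss(0, 1.5) for _ in range(200))   # Rasch difficulties

def run_cat(answer):
    """Administer one 20-item CAT; `answer(b)` returns the scored response."""
    theta_hat, used, given = 0.0, set(), []
    for _ in range(20):
        # Pick the unused item whose difficulty best matches the estimate
        # (equivalent to maximum information under the Rasch model).
        i = min((j for j in range(len(pool)) if j not in used),
                key=lambda j: abs(pool[j] - theta_hat))
        used.add(i)
        given.append(pool[i])
        p = 1 / (1 + math.exp(-(theta_hat - pool[i])))
        theta_hat += 0.4 * (answer(pool[i]) - p)   # crude ability update
    return given

honest = run_cat(lambda b: 1 if random.random() < 1 / (1 + math.exp(b)) else 0)
all_wrong = run_cat(lambda b: 0)    # deliberately answer every item incorrectly

print(f"mean difficulty served, honest examinee (theta = 0): {sum(honest)/20:+.2f}")
print(f"mean difficulty served, all-wrong strategy:          {sum(all_wrong)/20:+.2f}")
```

The all-wrong first pass collects a much easier set of items; if review then allowed those answers to be changed, the examinee would face an easier test than the adaptive algorithm intended, which is the concern the paper examines.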
Stocking, Martha L. – 1988
Recent advances in psychometrics and computer technology encourage the development of model-based methods of individualized testing on a microcomputer, where each examinee receives short tests and where the number of pretest items that can be administered is severely restricted. On-line (i.e., data is collected on operational equipment) methods…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Item Response Theory
Stocking, Martha L.; Lewis, Charles – 1995
In the periodic testing environment associated with conventional paper-and-pencil tests, the frequency with which items are seen by test-takers is tightly controlled in advance of testing by policies that regulate both the reuse of test forms and the frequency with which candidates may take the test. In the continuous testing environment…
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Test Construction
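One family of exposure controls gates administration probabilistically: the selected item is actually administered only with some probability k, otherwise the next candidate is tried. A hedged sketch of that general idea, using a Sympson-Hetter-style lottery with a single fixed k and an assumed pool; this is not the conditional procedure this paper develops.

```python
import math
import random

random.seed(5)

pool = [-2.5 + 5 * i / 199 for i in range(200)]   # hypothetical difficulties
K = 0.25                  # assumed exposure-control parameter, same for all items
exposure = [0] * len(pool)
N_EXAMINEES, TEST_LEN = 1000, 10

def info(theta, b):
    """Rasch item information."""
    p = 1 / (1 + math.exp(-(theta - b)))
    return p * (1 - p)

for _ in range(N_EXAMINEES):
    theta = random.gauss(0, 1)        # ability estimate taken as known, for brevity
    used = set()
    for _ in range(TEST_LEN):
        ranked = sorted((i for i in range(len(pool)) if i not in used),
                        key=lambda i: info(theta, pool[i]), reverse=True)
        for i in ranked:
            used.add(i)               # items that fail the lottery are set aside
            if random.random() < K:
                exposure[i] += 1      # administered
                break

print(f"max exposure rate with the lottery: {max(exposure) / N_EXAMINEES:.2f}")
```

With K = 0.25 even the most popular item is administered in only a fraction of the tests that reach it, so the peak exposure rate stays far below the near-certain exposure that unconstrained maximum-information selection gives the most central items.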