Publication Date

| Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 29 |
| Since 2022 (last 5 years) | 168 |
| Since 2017 (last 10 years) | 329 |
| Since 2007 (last 20 years) | 613 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 1057 |
| Test Items | 1057 |
| Adaptive Testing | 448 |
| Test Construction | 385 |
| Item Response Theory | 255 |
| Item Banks | 223 |
| Foreign Countries | 194 |
| Difficulty Level | 166 |
| Test Format | 160 |
| Item Analysis | 158 |
| Simulation | 142 |
Audience

| Audience | Count |
| --- | --- |
| Researchers | 24 |
| Practitioners | 20 |
| Teachers | 13 |
| Students | 2 |
| Administrators | 1 |
Location

| Location | Count |
| --- | --- |
| Germany | 17 |
| Australia | 13 |
| Japan | 12 |
| Taiwan | 12 |
| Turkey | 12 |
| United Kingdom | 12 |
| China | 11 |
| Oregon | 10 |
| Canada | 9 |
| Netherlands | 9 |
| United States | 9 |
Laws, Policies, & Programs

| Law or Program | Count |
| --- | --- |
| Individuals with Disabilities… | 8 |
| Americans with Disabilities… | 1 |
| Head Start | 1 |
The Comparability of the Statistical Characteristics of Test Items Generated by Computer Algorithms.
Meisner, Richard; And Others – 1993
This paper presents a study on the generation of mathematics test items using algorithmic methods. The history of this approach is briefly reviewed and is followed by a survey of the research to date on the statistical parallelism of algorithmically generated mathematics items. Results are presented for 8 parallel test forms generated using 16…
Descriptors: Algorithms, Comparative Analysis, Computer Assisted Testing, Item Banks
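The algorithmic item generation surveyed in the entry above can be sketched as a template with randomized parameters; drawing different seeds yields statistically parallel items. This is a minimal illustration only — the template and number ranges are invented, not those of the study:

```python
import random

def generate_item(seed):
    """Generate one linear-equation item from a fixed template.

    The template and parameter ranges are illustrative, not taken
    from the study above.
    """
    rng = random.Random(seed)       # seeding makes each item reproducible
    a = rng.randint(2, 9)
    b = rng.randint(1, 20)
    x = rng.randint(1, 10)          # the keyed answer
    stem = f"Solve for x: {a}x + {b} = {a * x + b}"
    return stem, x

# A "parallel form" is then a list of items drawn from the same template.
form = [generate_item(seed) for seed in range(5)]
```

Because every surface feature except the numbers is fixed, items from the same template are expected (though not guaranteed) to share difficulty and discrimination, which is what the study's statistical-parallelism question probes.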
Nitko, Anthony J.; Hsu, Tse-chi – 1984
Item analysis procedures appropriate for domain-referenced classroom testing are described. A conceptual framework within which item statistics can be considered and promising statistics in light of this framework are presented. The sampling fluctuations of the more promising item statistics for sample sizes comparable to the typical classroom…
Descriptors: Computer Assisted Testing, Criterion Referenced Tests, Item Analysis, Microcomputers
Hiscox, Michael D. – 1983
Educational item banking presents observers with a considerable paradox. The development of test items from scratch is viewed as wasteful, a luxury in times of declining resources. On the other hand, item banking has failed to become a mature technology despite large amounts of money and the efforts of talented professionals. The question of which…
Descriptors: Computer Assisted Testing, Cost Effectiveness, Cost Estimates, Educational Testing
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – ETS Research Report Series, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Test Items, Computer Assisted Testing, Computation, Adaptive Testing
Edwards, Ethan A. – 1990
Testing 1-2-3 is a general purpose testing system developed at the Computer-Based Education Research Laboratory at the University of Illinois for use on NovaNET computer-based education systems. The testing system can be used for: short, teacher-made quizzes, individualized examinations, computer managed instruction curriculum testing,…
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Scores, Teacher Made Tests
Samejima, Fumiko – 1990
Test validity is a concept that has often been ignored in the context of latent trait models and in modern test theory, particularly as it relates to computerized adaptive testing. Some considerations about the validity of a test and of a single item are proposed. This paper focuses on measures that are population-free and that will provide local…
Descriptors: Adaptive Testing, Computer Assisted Testing, Equations (Mathematics), Item Response Theory
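Population-free local measures of the kind discussed in this IRT literature are typically built on the item information function. A minimal sketch under the common two-parameter logistic (2PL) model, with illustrative parameter values:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# Information peaks where ability matches item difficulty (theta == b).
print(item_information(0.0, a=1.0, b=0.0))  # 0.25
```

The key property is that I(theta) depends only on the item parameters and the ability point, not on any examinee population — which is why such measures suit adaptive testing.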
Sander, Angelle M.; And Others – 1988
The effects of presenting test items in random order or in a sequence parallel to the order of presentation were studied by testing 92 undergraduates in an introductory psychology course at Emporia State University (Kansas). Two test forms, sequential (S) and random (R), of multiple-choice questions were prepared for four 1-hour examinations…
Descriptors: Computer Assisted Testing, Higher Education, Item Banks, Multiple Choice Tests
Sarvela, Paul D.; Noonan, John V. – Educational Technology, 1988
Describes measurement problems associated with computer based testing (CBT) programs when they are part of a computer assisted instruction curriculum. Topics discussed include CBT standards; selection of item types; the contamination of items that arises from test design strategies; and the non-equivalence of comparison groups in item analyses. (8…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Item Analysis, Psychometrics
Glas, Cees A. W.; van der Linden, Wim J. – 2001
To reduce the cost of item writing and to enhance the flexibility of item presentation, items can be generated by item-cloning techniques. An important consequence of cloning is that it may cause variability on the item parameters. Therefore, a multilevel item response model is presented in which it is assumed that the item parameters of a…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Costs
Thomasson, Gary L. – 1997
Score comparability is important to those who take tests and those who use them. One important concept related to test score comparability is that of "equity," which is defined as existing when examinees are indifferent as to which of two alternate forms of a test they would prefer to take. By their nature, computerized adaptive tests…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
Peer reviewed: Stone, Clement A. – Educational Measurement: Issues and Practice, 1989
MicroCAT version 3.0--an integrated test development, administration, and analysis system--is reviewed in this first article of a series on testing software. A framework for comparing testing software is presented. The strength of this package lies in the development, banking, and administration of items composed of text and graphics. (SLD)
Descriptors: Computer Assisted Testing, Computer Software, Computer Software Reviews, Data Analysis
Peer reviewed: Samejima, Fumiko – Psychometrika, 1994
Using the constant information model, constant amounts of test information, and a finite interval of ability, simulated data were produced for 8 ability levels and 20 numbers of test items. Analyses suggest that it is desirable to consider modifying test information functions when they measure accuracy in ability estimation. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Computer Simulation
Peer reviewed: Wainer, Howard – Educational Measurement: Issues and Practice, 1993
Some cautions are sounded for converting a linearly administered test to an adaptive format. Four areas are identified in which practices broadly used in traditionally constructed tests can have adverse effects if thoughtlessly adopted when a test is administered in an adaptive mode. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Practices, Test Construction
Peer reviewed: Fry, D. J. – Computers and Education, 1990
Discussion of computer-based testing (CBT) focuses on the development of the database format question (DFQ) which allows the user to build up a more complex answer than multiple choice by selecting items from a database presented on the computer screen. Programing is explained and an evaluation of DFQ is included. (Seven references) (LRW)
Descriptors: Computer Assisted Testing, Databases, Evaluation Methods, Higher Education
Bridgeman, Brent; Cline, Frederick – Journal of Educational Measurement, 2004
Time limits on some computer-adaptive tests (CATs) are such that many examinees have difficulty finishing, and some examinees may be administered tests with more time-consuming items than others. Results from over 100,000 examinees suggested that about half of the examinees must guess on the final six questions of the analytical section of the…
Descriptors: Guessing (Tests), Timed Tests, Adaptive Testing, Computer Assisted Testing
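The score impact of forced blind guessing at the end of a timed section, as studied in the entry above, can be roughed out with binomial arithmetic. The numbers below are illustrative (six guessed items, five options each) and are not taken from the study:

```python
from math import comb

# Blind guessing on the last items of a timed section:
# six 5-option multiple-choice items, so p(correct guess) = 0.2.
n, p = 6, 0.2
expected_correct = n / 5                      # 1.2 items right, on average
# Probability of getting at least 2 of the 6 right by chance alone:
p_at_least_2 = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (0, 1))
print(round(expected_correct, 2), round(p_at_least_2, 3))  # 1.2 0.345
```

On an adaptive test this matters more than on a linear one, because the scoring model treats each response as informative about ability rather than as a chance event.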

