Showing 706 to 720 of 1,057 results
Rudner, Lawrence – 1998
This digest discusses the advantages and disadvantages of using item banks, and it provides useful information for those who are considering implementing an item banking project in their school districts. The primary advantage of item banking is in test development. Using an item response theory method, such as the Rasch model, items from multiple…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Banks
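The Rasch model mentioned in the Rudner digest is what allows items calibrated on different forms to be placed on a common difficulty scale. A minimal sketch of its item response function, with purely illustrative item labels and difficulty values (not from the digest), is:

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability of a correct response under the Rasch model,
    where theta is examinee ability and b is item difficulty,
    both expressed on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical difficulties for items that originated on two different
# forms; once calibrated to a common scale they can sit in one bank.
bank = {"form_A_item_01": -0.8, "form_B_item_17": 0.4}

for label, difficulty in bank.items():
    print(label, round(rasch_probability(theta=0.0, b=difficulty), 3))
```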
Davey, Tim; Parshall, Cynthia G. – 1995
Although computerized adaptive tests acquire their efficiency by successively selecting items that provide optimal measurement at each examinee's estimated level of ability, operational testing programs will typically consider additional factors in item selection. In practice, items are generally selected with regard to at least three, often…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
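The Davey and Parshall abstract notes that operational programs temper maximum-information selection with other considerations. One common way to express that compromise, sketched here with hypothetical field names and thresholds rather than the authors' own procedure, is to take the most informative item only from the subset that still satisfies content and exposure rules:

```python
import math
from dataclasses import dataclass

@dataclass
class Item:
    a: float         # discrimination
    b: float         # difficulty
    content: str     # content-area code
    exposure: float  # observed exposure rate so far

def information(item: Item, theta: float) -> float:
    """Fisher information of a two-parameter logistic item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-item.a * (theta - item.b)))
    return item.a ** 2 * p * (1.0 - p)

def select_item(items, theta, needed_content, max_exposure=0.25):
    """Most informative item among those meeting content and exposure rules."""
    eligible = [it for it in items
                if it.content == needed_content and it.exposure < max_exposure]
    return max(eligible, key=lambda it: information(it, theta), default=None)
```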
Anderson, Paul S. – 1987
A recent innovation in the area of educational measurement is multi-digit testing (MDT), a machine-scored near-equivalent to "fill-in-the-blank" testing. The MDT method is based on long lists (or "Answer Banks") that contain up to 1,000 discrete answers, each with a three-digit label. Students taking an MDT test mark…
Descriptors: College Students, Computer Assisted Testing, Higher Education, Scoring
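Because each MDT answer is identified by a three-digit label drawn from a long answer bank, machine scoring reduces to exact code matching. A toy scoring routine (the question numbers and code values are invented for illustration) shows the idea:

```python
def score_mdt(responses: dict, key: dict) -> int:
    """Count items whose marked three-digit answer code matches the key."""
    return sum(1 for question, code in key.items()
               if responses.get(question) == code)

# Invented example: the examinee marked "042" and "917" for two questions.
answer_key = {1: "042", 2: "655"}
marked = {1: "042", 2: "917"}
print(score_mdt(marked, answer_key))  # 1 of 2 correct
```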
Thorndike, Robert L. – 1983
In educational testing, one is concerned to get as much information as possible about a given examinee from each minute of testing time. Maximum information is obtained when the difficulty of each test exercise matches the estimated ability level of the examinee. The goal of adaptive testing is to accomplish this. Adaptive patterns are reviewed…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Latent Trait Theory
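The matching principle in the Thorndike review can be stated compactly: under the Rasch model, the information an item of difficulty $b$ yields at ability $\theta$ is $I(\theta) = P(\theta)\,[1 - P(\theta)]$ with $P(\theta) = 1/(1 + e^{-(\theta - b)})$, which is largest when $P(\theta) = 0.5$, i.e. when $b = \theta$. This is the sense in which matching item difficulty to the estimated ability level maximizes the information obtained per item.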
Wetzel, C. Douglas; McBride, James R. – 1983
Computer simulation was used to assess the effects of item parameter estimation errors on different item selection strategies used in adaptive and conventional testing. To determine whether these effects reduced the advantages of certain optimal item selection strategies, simulations were repeated in the presence and absence of item parameter…
Descriptors: Adaptive Testing, Computer Assisted Testing, Latent Trait Theory, Occupational Tests
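The manipulation Wetzel and McBride describe, repeating the same simulations with and without item parameter estimation error, can be mimicked by perturbing the generating parameters before handing them to the item selection routine. The error standard deviations below are illustrative, not values from the study:

```python
import random

def perturb(true_params, sd_a=0.10, sd_b=0.20, seed=0):
    """Return (a, b) item parameters contaminated with simulated
    calibration error of the given (illustrative) standard deviations."""
    rng = random.Random(seed)
    return [(a + rng.gauss(0.0, sd_a), b + rng.gauss(0.0, sd_b))
            for a, b in true_params]

true_bank = [(1.2, -0.5), (0.8, 0.3), (1.5, 1.1)]  # hypothetical items
estimated_bank = perturb(true_bank)                # used in the "error" condition
```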
van der Linden, Wim J.; Boekkooi-Timminga, Ellen – 1986
In order to estimate the classical coefficient of test reliability, parallel measurements are needed. H. Gulliksen's matched random subtests method, which is a graphical method for splitting a test into parallel test halves, has practical relevance because it maximizes the alpha coefficient as a lower bound of the classical test reliability…
Descriptors: Algorithms, Computer Assisted Testing, Computer Software, Difficulty Level
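For a test split into halves $X$ and $Y$, the coefficient referred to in the abstract is alpha computed on the two parts, $\alpha = 2\left(1 - \frac{\sigma_X^2 + \sigma_Y^2}{\sigma_{X+Y}^2}\right)$, which is a lower bound to the classical reliability; Gulliksen's matched random subtests method aims to make this bound as large as possible by choosing well-matched halves.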
Boekkooi-Timminga, Ellen – 1988
A new test construction method based on integer linear programming is described. This method selects optimal tests in small amounts of computer time. The new method, called the Cluster-Based Method, assumes that the items in the bank have been grouped according to their item information curves so that items within a group, or cluster, are…
Descriptors: Computer Assisted Testing, Item Banks, Latent Trait Theory, Mathematical Models
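A generic 0-1 programming formulation of the test construction problem the abstract describes (not necessarily the exact model in the paper) selects items $x_i \in \{0,1\}$ so as to maximize the test information at a target ability point while respecting length and content constraints: maximize $\sum_i I_i(\theta_0)\, x_i$ subject to $\sum_i x_i = n$, $\sum_{i \in C_g} x_i \le n_g$ for each content group $g$, and $x_i \in \{0,1\}$. The cluster-based method's contribution is to make such problems solvable in small amounts of computer time by grouping items with similar information curves.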
Naccarato, Richard W. – 1988
The current status of banks of test items existing across the United States was determined through a survey conducted between September and December 1987. Item "bank" in this context does not imply that the test items are available in computerized form, but simply that "deposited" test items can be withdrawn for use. Emphasis…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, National Surveys
Luecht, Richard M. – 2003
This paper presents a multistage adaptive testing test development paradigm that promises to handle content balancing and other test development needs, psychometric reliability concerns, and item exposure. The bundled multistage adaptive testing (BMAT) framework is a modification of the computer-adaptive sequential testing framework introduced by…
Descriptors: Adaptive Testing, Computer Assisted Testing, High Stakes Tests, Mastery Tests
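Multistage designs like BMAT administer preassembled modules and route examinees between stages on a provisional score. A toy routing rule, with module names and cut points invented for illustration rather than taken from the BMAT paper, conveys the mechanism:

```python
def route_next_module(provisional_theta: float, cuts=(-0.5, 0.5)) -> str:
    """Pick the next-stage module from a provisional ability estimate.
    Module names and cut scores are illustrative only."""
    low_cut, high_cut = cuts
    if provisional_theta < low_cut:
        return "easier_module"
    if provisional_theta < high_cut:
        return "middle_module"
    return "harder_module"

print(route_next_module(0.8))  # "harder_module"
```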
Chang, Shu-Ren; Plake, Barbara S.; Ferdous, Abdullah A. – Online Submission, 2005
This study examined the time that examinees of different ability levels spend taking a CAT, which administers more demanding items to higher-ability examinees. It was also found that high-ability examinees spend more time on the pretest items, which are not tailored to the examinees' ability level, than do lower-ability examinees. Higher-ability examinees showed persistence with test…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Reaction Time
Patsula, Liane N.; Steffen, Manfred – 1997
One challenge associated with computerized adaptive testing (CAT) is the maintenance of test and item security while allowing for daily testing. An alternative to continually creating new pools containing an independent set of items would be to consider each CAT pool as a sample of items from a larger collection (referred to as a VAT) rather than…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Multiple Choice Tests
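Treating each operational pool as a sample from a larger collection (the "vat") means that successive pools can overlap rather than being built from disjoint item sets. A toy sketch of drawing rotating pools, with a hypothetical vat size and pool size, is:

```python
import random

def draw_pool(vat_ids, pool_size: int, seed: int) -> set:
    """Sample one operational pool, without replacement, from the vat."""
    rng = random.Random(seed)
    return set(rng.sample(sorted(vat_ids), pool_size))

vat = set(range(1, 1001))  # hypothetical 1,000-item vat
pools = [draw_pool(vat, pool_size=300, seed=s) for s in range(3)]
print(len(pools[0] & pools[1]))  # overlap between two successive pools
```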
Patsula, Liane N.; Pashley, Peter J. – 1997
Many large-scale testing programs routinely pretest new items alongside operational (or scored) items to determine their empirical characteristics. If these pretest items pass certain statistical criteria, they are placed into an operational item pool; otherwise they are edited and re-pretested or simply discarded. In these situations, reliable…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Banks
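The screening step described for pretest items, passing "certain statistical criteria" before entering the operational pool, is often a simple rule on classical item statistics. The thresholds below are hypothetical placeholders, not any program's actual criteria:

```python
def passes_screen(p_value: float, item_total_r: float,
                  min_p: float = 0.20, max_p: float = 0.90,
                  min_r: float = 0.15) -> bool:
    """Keep a pretest item only if its proportion correct and
    item-total correlation fall inside illustrative bounds."""
    return min_p <= p_value <= max_p and item_total_r >= min_r

print(passes_screen(p_value=0.55, item_total_r=0.32))  # True
```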
Hertz, Norman R.; Chinn, Roberta N. – 2003
This study explored the effect of item exposure on two conventional examinations administered as computer-based tests. A principal hypothesis was that item exposure would have little or no effect on average difficulty of the items over the course of an administrative cycle. This hypothesis was tested by exploring conventional item statistics and…
Descriptors: Computer Assisted Testing, Item Banks, Item Response Theory, Licensing Examinations (Professions)
Peer reviewed
Armstrong, R. D.; And Others – Applied Psychological Measurement, 1996
When the network-flow algorithm (NFA) and the average growth approximation algorithm (AGAA) were used for automated test assembly with American College Test and Armed Services Vocational Aptitude Battery item banks, results indicate that reasonable error in item parameters is not harmful for test assembly using NFA or AGAA. (SLD)
Descriptors: Algorithms, Aptitude Tests, College Entrance Examinations, Computer Assisted Testing
Peer reviewed
Eignor, Daniel R. – Journal of Educational Measurement, 1997
The authors of the "Guidelines," a task force of eight, intend to present an organized list of features to be considered in reporting or evaluating computerized-adaptive assessments. Apart from a few weaknesses, the book is a useful and complete document that will be very helpful to test developers. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Evaluation Methods, Guidelines