Showing 46 to 60 of 115 results
Gershon, Richard; Bergstrom, Betty – 1995
When examinees are allowed to review responses on an adaptive test, can they "cheat" the adaptive algorithm in order to take an easier test and improve their performance? Theoretically, deliberately answering items incorrectly will lower the examinee ability estimate and easy test items will be administered. If review is then allowed,…
Descriptors: Adaptive Testing, Algorithms, Cheating, Computer Assisted Testing
Rudner, Lawrence – 1998
This digest discusses the advantages and disadvantages of using item banks, and it provides useful information for those who are considering implementing an item banking project in their school districts. The primary advantage of item banking is in test development. Using an item response theory method, such as the Rasch model, items from multiple…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Banks
Thorndike, Robert L. – 1983
In educational testing, one is concerned to get as much information as possible about a given examinee from each minute of testing time. Maximum information is obtained when the difficulty of each test exercise matches the estimated ability level of the examinee. The goal of adaptive testing is to accomplish this. Adaptive patterns are reviewed…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Latent Trait Theory
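The principle summarized in the Thorndike entry — that information is maximized when item difficulty matches the examinee's estimated ability — can be sketched under the Rasch model. This is an illustrative sketch, not the procedure from any of the listed studies; the function names and the difficulty values are hypothetical.

```python
import math

def rasch_information(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta.
    For the Rasch model this equals p * (1 - p), which peaks at p = 0.5,
    i.e. when b equals theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))  # probability of a correct response
    return p * (1.0 - p)

def most_informative_item(theta, difficulties):
    """Index of the item in the bank with maximum information at theta."""
    return max(range(len(difficulties)),
               key=lambda i: rasch_information(theta, difficulties[i]))

# With an ability estimate of 0.0, the item whose difficulty is closest
# to 0.0 carries the most information:
idx = most_informative_item(0.0, [-2.0, -1.0, 0.0, 1.0, 2.0])  # -> 2
```

An adaptive test repeats this selection after each response, re-estimating theta as it goes, which is why each minute of testing time yields more information than a fixed-form test of the same length.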
Peer reviewed
Wise, Steven L.; Finney, Sara J.; Enders, Craig K.; Freeman, Sharon A.; Severance, Donald D. – Applied Measurement in Education, 1999
Examined whether providing item review on a computerized adaptive test could be used by examinees to inflate their scores. Two studies involving 139 undergraduates suggest that examinees are not highly proficient at discriminating item difficulty. A simulation study showed the usefulness of a strategy identified by G. Kingsbury (1996) as a way to…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Higher Education
Peer reviewed
Deak, Gedeon O.; Ray, Shanna D.; Pick, Anne D. – Cognitive Development, 2004
To test preschoolers' ability to flexibly switch between abstract rules differing in difficulty, ninety-three 3-, 4-, and 5-year-olds were instructed to switch from an (easier) shape-sorting to a (harder) function-sorting rule, or vice versa. Children learned one rule, sorted four test sets, then learned the other rule, and sorted four more sets.…
Descriptors: Difficulty Level, Preschool Children, Cognitive Tests, Adaptive Testing
Peer reviewed
Vispoel, Walter P.; Clough, Sara J.; Bleiler, Timothy – Journal of Educational Measurement, 2005
Recent studies have shown that restricting review and answer change opportunities on computerized adaptive tests (CATs) to items within successive blocks reduces time spent in review, satisfies most examinees' desires for review, and controls against distortion in proficiency estimates resulting from intentional incorrect answering of items prior…
Descriptors: Mathematics, Item Analysis, Adaptive Testing, Computer Assisted Testing
Loyd, Brenda H. – 1984
One form of adaptive testing involves a two-stage procedure. The first stage is the administration of a routing test. From this first test, an estimate of an examinee's ability is obtained. On the basis of this ability estimate, a second test focused on a given ability level is administered. The purpose of this study was to compare the efficiency…
Descriptors: Academic Ability, Adaptive Testing, Difficulty Level, Elementary Education
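The two-stage procedure in the Loyd entry — a routing test followed by a second test targeted at the estimated ability level — can be sketched as follows. The cutoff proportions and test labels here are illustrative assumptions, not values from the study.

```python
def route(num_correct, num_items, cutoffs=(0.4, 0.7)):
    """Pick a second-stage test from routing-test performance.
    The number-correct proportion stands in for the ability estimate;
    the cutoff proportions (0.4, 0.7) are hypothetical."""
    prop = num_correct / num_items
    if prop < cutoffs[0]:
        return "easy"
    elif prop < cutoffs[1]:
        return "medium"
    return "hard"

route(12, 20)  # proportion 0.6 -> "medium" second-stage test
```

Because each examinee takes only two fixed forms, two-stage designs trade some of the efficiency of fully adaptive item selection for much simpler administration.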
Peer reviewed
Rocklin, Thomas; O'Donnell, Angela M. – Journal of Educational Psychology, 1987
An experiment was conducted that contrasted a variant of computerized adaptive testing, self-adapted testing, with two traditional tests. Participants completed a self-report measure of test anxiety and were randomly assigned to take one of the three tests of verbal ability. Subjects generally chose more difficult items as the test progressed. (Author/LMO)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
Peer reviewed
Wang, Xiang-bo; And Others – Applied Measurement in Education, 1995
An experiment is reported in which 225 high school students were asked to choose among several multiple-choice items but then were required to answer them all. It is concluded that allowing choice while keeping tests fair is possible only when the choice is irrelevant to difficulty. (SLD)
Descriptors: Adaptive Testing, Difficulty Level, Equated Scores, High School Students
Peer reviewed
Eggen, Theo J. H. M.; Verschoor, Angela J. – Applied Psychological Measurement, 2006
Computerized adaptive tests (CATs) are individualized tests that, from a measurement point of view, are optimal for each individual, possibly under some practical conditions. In the present study, it is shown that maximum information item selection in CATs using an item bank that is calibrated with the one- or the two-parameter logistic model…
Descriptors: Adaptive Testing, Difficulty Level, Test Items, Item Response Theory
Stocking, Martha L. – 1994
Modern applications of computerized adaptive testing (CAT) are typically grounded in item response theory (IRT; Lord, 1980). While the IRT foundations of adaptive testing provide a number of approaches to adaptive test scoring that may seem natural and efficient to psychometricians, these approaches may be more demanding for test takers, test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Equated Scores
Peer reviewed
Bridgeman, Brent; Cline, Frederick – Journal of Educational Measurement, 2004
Time limits on some computer-adaptive tests (CATs) are such that many examinees have difficulty finishing, and some examinees may be administered tests with more time-consuming items than others. Results from over 100,000 examinees suggested that about half of the examinees must guess on the final six questions of the analytical section of the…
Descriptors: Guessing (Tests), Timed Tests, Adaptive Testing, Computer Assisted Testing
Peer reviewed
Ariel, Adelaide; Veldkamp, Bernard P.; Breithaupt, Krista – Applied Psychological Measurement, 2006
Computerized multistage testing (MST) designs require sets of test questions (testlets) to be assembled to meet strict, often competing criteria. Rules that govern testlet assembly may dictate the number of questions on a particular subject or may describe desirable statistical properties for the test, such as measurement precision. In an MST…
Descriptors: Item Response Theory, Item Banks, Psychometrics, Test Items
Peer reviewed
Revuelta, Javier – Journal of Educational and Behavioral Statistics, 2004
This article presents a psychometric model for estimating ability and item-selection strategies in self-adapted testing. In contrast to computer adaptive testing, in self-adapted testing the examinees are allowed to select the difficulty of the items. The item-selection strategy is defined as the distribution of difficulty conditional on the…
Descriptors: Psychometrics, Adaptive Testing, Test Items, Evaluation Methods
Wise, Steven L.; And Others – 1997
The degree to which item review on a computerized adaptive test (CAT) could be used by examinees to inflate their scores artificially was studied. G. G. Kingsbury (1996) described a strategy in which examinees could use the changes in item difficulty during a CAT to determine which items' answers are incorrect and should be changed during item…
Descriptors: Achievement Gains, Adaptive Testing, College Students, Computer Assisted Testing
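The review strategy described in the Wise et al. entry rests on the fact that a CAT typically administers an easier item after an incorrect response. A minimal sketch of that inference, assuming a hypothetical function name and difficulty sequence:

```python
def flag_likely_wrong(difficulties):
    """Given the sequence of item difficulties a CAT administered,
    flag each item (except the last) whose successor was easier.
    Under typical CAT item selection, a drop in difficulty signals
    that the preceding answer was scored incorrect, so an examinee
    permitted item review could target those answers for change."""
    return [i for i in range(len(difficulties) - 1)
            if difficulties[i + 1] < difficulties[i]]

flag_likely_wrong([0.0, 0.5, 0.1, 0.6, 1.0])  # -> [1]: difficulty fell after item 1
```

Exploiting this pattern requires examinees to track relative item difficulty accurately, which the studies above suggest most examinees cannot do well — and restricting review to blocks, as in the Vispoel et al. entry, further limits the strategy.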