Showing 1 to 15 of 29 results
Lau, C. Allen; Wang, Tianyou – 1999
A study was conducted to extend the sequential probability ratio testing (SPRT) procedure with the polytomous model under some practical constraints in computerized classification testing (CCT), such as methods to control item exposure rate, and to study the effects of other variables, including item information algorithms, test difficulties, item…
Descriptors: Algorithms, Computer Assisted Testing, Difficulty Level, Item Banks
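The SPRT procedure referenced above classifies examinees as pass or fail by accumulating a log-likelihood ratio over administered items and stopping once it crosses a decision bound. A minimal sketch under a Rasch (1PL) model; the thresholds, item difficulties, and error rates below are hypothetical, not taken from the study:

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, difficulties, theta0=-0.5, theta1=0.5,
                  alpha=0.05, beta=0.05):
    """Sequential probability ratio test for pass/fail classification.

    theta0 and theta1 bracket the cut score; alpha and beta are the
    nominal error rates. Returns 'fail', 'pass', or 'continue' if the
    item supply runs out before a decision bound is crossed.
    """
    lower = math.log(beta / (1.0 - alpha))   # bound for deciding 'fail'
    upper = math.log((1.0 - beta) / alpha)   # bound for deciding 'pass'
    llr = 0.0
    for x, b in zip(responses, difficulties):
        p1, p0 = rasch_p(theta1, b), rasch_p(theta0, b)
        llr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        if llr <= lower:
            return "fail"
        if llr >= upper:
            return "pass"
    return "continue"
```

The study's extensions (polytomous models, item-exposure control) would replace the dichotomous likelihood terms and constrain which item is drawn next; this sketch shows only the core sequential decision rule.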
Schnipke, Deborah L. – 2002
A common practice in some certification fields (e.g., information technology) is to draw items from an item pool randomly and apply a common passing score, regardless of the items administered. Because these tests are commonly used, it is important to determine how accurate the pass/fail decisions are for such tests and whether fairly small,…
Descriptors: Decision Making, Difficulty Level, Item Banks, Pass Fail Grading
Zhu, Daming; Fan, Meichu – 1999
The convention for selecting starting points (that is, initial items) on a computerized adaptive test (CAT) is to choose as starting points items of medium difficulty for all examinees. Selecting a starting point based on prior information about an individual's ability was first suggested many years ago, but has been believed unimportant provided…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
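The starting-point convention the abstract contrasts can be expressed as a one-line selection rule: pick the pool item whose difficulty is closest to a prior ability estimate, where a prior of zero reproduces the conventional "medium difficulty" start. A hypothetical sketch:

```python
def pick_starting_item(item_difficulties, prior_theta=0.0):
    """Return the index of the item whose Rasch difficulty is closest
    to a prior ability estimate. With the default prior of 0.0 this
    reduces to the conventional medium-difficulty starting point."""
    return min(range(len(item_difficulties)),
               key=lambda i: abs(item_difficulties[i] - prior_theta))
```

Supplying an informative `prior_theta` (e.g., from previous test scores) shifts the first item toward the examinee's likely ability, which is the alternative the study evaluates.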
Linacre, John Michael – 1988
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who has basic computer equipment and elementary…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Cutting Scores
Gershon, Richard; Bergstrom, Betty – 1995
When examinees are allowed to review responses on an adaptive test, can they "cheat" the adaptive algorithm in order to take an easier test and improve their performance? Theoretically, deliberately answering items incorrectly will lower the examinee ability estimate and easy test items will be administered. If review is then allowed,…
Descriptors: Adaptive Testing, Algorithms, Cheating, Computer Assisted Testing
Haladyna, Thomas M.; And Others – 1987
This paper discusses the development and use of "item shells" in constructing multiple-choice tests. An item shell is a "hollow" item that contains the syntactic structure and context of an item without specific content. Item shells are empirically developed from successfully used items selected from an existing item pool. Use…
Descriptors: Difficulty Level, Health Personnel, Item Banks, Multiple Choice Tests
Peer reviewed
Prien, Borge – Studies in Educational Evaluation, 1989
Under certain conditions it may be possible to determine the difficulty of previously untested test items. Although no recipe can be provided, reflections on this topic are presented, drawing on concepts of item banking. A functional constructive method is suggested as having the most potential. (SLD)
Descriptors: Difficulty Level, Educational Assessment, Foreign Countries, Item Analysis
Lutkus, Anthony D.; Laskaris, George – 1981
Analyses of student responses to Introductory Psychology test questions were discussed. The publisher supplied a two-thousand-item test bank on computer tape. Instructors selected questions for fifteen-item tests. The test questions were labeled by the publisher as factual or conceptual. The semester course used a mastery learning format in which…
Descriptors: Difficulty Level, Higher Education, Item Analysis, Item Banks
George, Archie A. – 1979
The appropriateness of the use of the standardized residual (SR) to assess congruence between sample test item responses and the one parameter latent trait (Rasch) item characteristic curve is investigated. Latent trait theory is reviewed, as well as theory of the SR, the apparent error in calculating the expected distribution of the SR, and…
Descriptors: Academic Ability, Computer Programs, Difficulty Level, Goodness of Fit
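The standardized residual examined in the paper compares an observed dichotomous response with its Rasch expectation. A minimal sketch of the standard formula, SR = (x − P) / sqrt(P(1 − P)); the specific parameter values shown in the test are illustrative only:

```python
import math

def standardized_residual(x, theta, b):
    """Standardized residual of a 0/1 response x for an examinee of
    ability theta on an item of Rasch difficulty b.

    P is the model-expected probability of a correct response;
    P * (1 - P) is its binomial variance."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return (x - p) / math.sqrt(p * (1.0 - p))
```

When ability equals difficulty, P = 0.5, so a correct response yields SR = +1 and an incorrect one SR = −1; the paper's concern is how such residuals are expected to be distributed across a sample.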
Dodds, Jeffrey – 1999
Basic precepts for test development are described and explained as they are presented in measurement textbooks commonly used in the fields of education and psychology. The five building blocks discussed as the foundation of well-constructed tests are: (1) specification of purpose; (2) standard conditions; (3) consistency; (4) validity; and (5)…
Descriptors: Difficulty Level, Educational Research, Grading, Higher Education
Tollefson, Nona; Tripp, Alice – 1983
This study compared the item difficulty and item discrimination of three multiple-choice item formats: a complex alternative ("none of the above") as the correct answer, a complex alternative as a foil, and the one-correct-answer format. One hundred four graduate students were randomly assigned to complete…
Descriptors: Analysis of Variance, Difficulty Level, Graduate Students, Higher Education
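The two classical statistics compared across formats above have standard definitions: item difficulty is the proportion of examinees answering correctly, and item discrimination can be indexed by the item-total correlation. A minimal sketch (the response matrix and the choice of an uncorrected item-total correlation are illustrative, not the study's exact analysis):

```python
import math

def item_stats(scores):
    """Classical item analysis for a 0/1 response matrix
    (rows = examinees, columns = items). Returns, per item, the
    difficulty (proportion correct) and a simple discrimination
    index (item-total correlation, total including the item itself)."""
    n_items = len(scores[0])
    totals = [sum(row) for row in scores]
    mt = sum(totals) / len(totals)
    stats = []
    for j in range(n_items):
        item = [row[j] for row in scores]
        p = sum(item) / len(item)  # difficulty: proportion correct
        cov = sum((x - p) * (t - mt) for x, t in zip(item, totals)) / len(item)
        vi = sum((x - p) ** 2 for x in item) / len(item)
        vt = sum((t - mt) ** 2 for t in totals) / len(totals)
        r = cov / math.sqrt(vi * vt) if vi > 0 and vt > 0 else 0.0
        stats.append({"difficulty": p, "discrimination": r})
    return stats
```

A corrected item-total correlation (excluding the item from the total) is preferable for short tests; it is omitted here to keep the sketch minimal.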
Gershon, Richard C.; And Others – 1994
A 1992 study by R. Gershon found discrepancies when comparing the theoretical Rasch item characteristic curve with the average empirical curve for 1,304 vocabulary items administered to 7,711 students. When person-item mismatches were deleted (for any person-item interaction where the ability of the person was much higher or much lower than the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education
Wise, Steven L.; And Others – 1991
According to item response theory (IRT), examinee ability estimation is independent of the particular set of test items administered from a calibrated pool. Although the most popular application of this feature of IRT is computerized adaptive (CA) testing, a recently proposed alternative is self-adapted (SA) testing, in which examinees choose the…
Descriptors: Ability Identification, Adaptive Testing, College Students, Comparative Testing
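The invariance property the abstract describes — that an IRT ability estimate does not depend on which calibrated items were administered — can be illustrated with a maximum-likelihood estimator under the Rasch model. A hypothetical sketch via Newton-Raphson; note it does not handle all-correct or all-incorrect response patterns, for which the MLE is unbounded:

```python
import math

def estimate_theta(responses, difficulties, iters=50):
    """Maximum-likelihood Rasch ability estimate for a 0/1 response
    vector and the matching item difficulties, via Newton-Raphson.
    Works for any calibrated subset of items with a mixed response
    pattern, which is the invariance property at issue."""
    theta = 0.0
    for _ in range(iters):
        ps = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, ps))  # score function
        info = sum(p * (1.0 - p) for p in ps)             # test information
        step = grad / info
        theta += step
        if abs(step) < 1e-8:
            break
    return theta
```

Whether CA and SA testing yield comparably accurate estimates in practice, despite this theoretical invariance, is the empirical question the study addresses.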
Drasgow, Fritz; Parsons, Charles K. – 1982
The effects of a multidimensional latent trait space on estimation of item and person parameters by the computer program LOGIST are examined. Several item pools were simulated that ranged from truly unidimensional to an inconsequential general latent trait. Item pools with intermediate levels of prepotency of the general latent trait were also…
Descriptors: Computer Simulation, Computer Software, Difficulty Level, Item Analysis
Byars, Alvin Gregg – 1980
The objectives of this investigation are to develop, describe, assess, and demonstrate procedures for constructing mastery tests to minimize errors of classification and to maximize decision reliability. The guidelines are based on conditions where item exchangeability is a reasonable assumption and the test constructor can control the number of…
Descriptors: Cutting Scores, Difficulty Level, Grade 4, Intermediate Grades