Showing all 9 results
Tollefson, Nona; Tripp, Alice – 1983
This study compared the item difficulty and item discrimination of three multiple choice item formats. The formats studied were: a complex alternative (none of the above) as the correct answer; a complex alternative as a foil; and the one-correct-answer format. One hundred four graduate students were randomly assigned to complete…
Descriptors: Analysis of Variance, Difficulty Level, Graduate Students, Higher Education
Nissan, Susan; And Others – 1996
One of the item types in the Listening Comprehension section of the Test of English as a Foreign Language (TOEFL) test is the dialogue. Because the dialogue item pool needs to have an appropriate balance of items at a range of difficulty levels, test developers have examined items at various difficulty levels in an attempt to identify their…
Descriptors: Classification, Dialogs (Language), Difficulty Level, English (Second Language)
Wainer, Howard; Thissen, David – 1994
When an examination consists in whole or in part of constructed response test items, it is common practice to allow the examinee to choose a subset of the constructed response questions from a larger pool. It is sometimes argued that, if choice were not allowed, the limitations on domain coverage forced by the small number of items might unfairly…
Descriptors: Constructed Response, Difficulty Level, Educational Testing, Equated Scores
Roid, Gale H.; Wendler, Cathy L. W. – 1983
The development of the emerging technology of item writing was motivated in part by the desire to reduce potential subjectivity and bias between different item writers who attempt to construct parallel achievement tests. The present study contrasts four test forms constructed by the combined efforts of six item writers using four methods of item…
Descriptors: Achievement Tests, Difficulty Level, Intermediate Grades, Item Analysis
Rachor, Robert E.; Gray, George T. – 1996
Two frequently cited guidelines for writing multiple choice test item stems are: (1) the stem can be written in either a question or statement-to-be-completed format; and (2) only positively worded stems should be used. These guidelines were evaluated in a survey of the test item banks of 13 nationally administered examinations in the physician…
Descriptors: Allied Health Personnel, Difficulty Level, High Achievement, Item Banks
Linacre, John M. – 1987
This paper describes a computer program in Microsoft BASIC which selects and administers test items from a small item bank. The difficulty level of each selected item depends on the test taker's previous response. This adaptive system is based on the Rasch model. The Rasch model uses a unit of measurement based on the logarithm of the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Individual Testing
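The Rasch-based adaptive logic described in the Linacre abstract can be sketched in a few lines: under the Rasch model, the probability of a correct response depends only on the gap between ability and item difficulty on the logit scale, and an adaptive tester picks the unused item nearest the current ability estimate. The function names, the fixed-step ability update, and the example item bank below are illustrative assumptions, not the original BASIC program.

```python
import math

def p_correct(theta, b):
    # Rasch model: P(correct) = 1 / (1 + exp(-(theta - b))),
    # where theta is ability and b is item difficulty, both in logits.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, difficulties, used):
    # Select the unused item whose difficulty is closest to the current
    # ability estimate (the most informative item under the Rasch model).
    candidates = [i for i in range(len(difficulties)) if i not in used]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta))

def update_theta(theta, correct, step=0.7):
    # Crude fixed-step update for illustration (not maximum likelihood):
    # move the ability estimate up after a correct answer, down otherwise.
    return theta + step if correct else theta - step
```

For example, with a bank of difficulties `[-1.0, 0.2, 1.5]` and a starting estimate of 0.0 logits, `next_item` picks the 0.2-logit item first; a correct answer raises the estimate to 0.7 and the 1.5-logit item is administered next.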
van Roosmalen, Willem M. M. – 1983
The construction of objective tests for native language reading comprehension is described. The tests were designed for the early secondary school years in several kinds of schools, vocational and non-vocational. The description focuses on the use of the Rasch model in test development, to develop a large pool of homogeneous items and establish…
Descriptors: Ability Grouping, Difficulty Level, Foreign Countries, Item Banks
Wainer, Howard; Kiely, Gerard L. – 1986
Recent experience with the Computerized Adaptive Test (CAT) has raised a number of concerns about its practical applications. The concerns are principally involved with the concept of having the computer construct the test from a precalibrated item pool, and substituting statistical characteristics for the test developer's skills. Problems with…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Construct Validity
Maihoff, N. A.; Mehrens, Wm. A. – 1985
A comparison is presented of alternate-choice and true-false item forms used in an undergraduate natural science course. The alternate-choice item is a modified two-choice multiple-choice item in which the two responses are included within the question stem. This study (1) compared the difficulty level, discrimination level, reliability, and…
Descriptors: Classroom Environment, College Freshmen, Comparative Analysis, Comparative Testing