Showing 1 to 15 of 21 results
Peer reviewed
Haladyna, Thomas M.; Rodriguez, Michael C. – Educational Assessment, 2021
Full-information item analysis provides item developers and reviewers with comprehensive empirical evidence of item quality, including option response frequency, the point-biserial index (PBI) for distractors, mean scores of respondents selecting each option, and option trace lines. The multi-serial index (MSI) is introduced as a more informative…
Descriptors: Test Items, Item Analysis, Reading Tests, Mathematics Tests
Peer reviewed
Haladyna, Thomas M.; Rodriguez, Michael C.; Stevens, Craig – Applied Measurement in Education, 2019
Evidence is mounting in support of the guidance to employ more three-option multiple-choice items. Theoretical analyses, empirical results, and practical considerations indicate that such items are of equal or higher quality than four- or five-option items, and that more items can be administered to improve content coverage. This study looks at 58 tests,…
Descriptors: Multiple Choice Tests, Test Items, Testing Problems, Guessing (Tests)
Haladyna, Thomas M. – IDEA Center, Inc., 2018
Writing multiple-choice test items to measure student learning in higher education is a challenge. Based on extensive scholarly research and experience, the author describes various item formats, offers guidelines for creating these items, and provides many examples of both good and bad test items. He also suggests some shortcuts for developing…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Higher Education
Peer reviewed
Haladyna, Thomas M.; Downing, Steven M. – Applied Measurement in Education, 1989
Results of 96 theoretical/empirical studies were reviewed to see if they support a taxonomy of 43 rules for writing multiple-choice test items. The taxonomy is the result of an analysis of 46 textbooks dealing with multiple-choice item writing. For nearly half of the rules, no research was found. (SLD)
Descriptors: Classification, Literature Reviews, Multiple Choice Tests, Test Construction
Haladyna, Thomas M. – 1999
This book explains how to write effective multiple-choice test items and how to study responses to items in order to evaluate and improve them, two topics that are very important in the development of many cognitive tests. The chapters are: (1) "Providing a Context for Multiple-Choice Testing"; (2) "Constructed-Response and Multiple-Choice Item Formats"; (3)…
Descriptors: Constructed Response, Multiple Choice Tests, Test Construction, Test Format
Peer reviewed
Haladyna, Thomas M.; Downing, Steven M.; Rodriguez, Michael C. – Applied Measurement in Education, 2002
Validated a taxonomy of 31 multiple-choice item-writing guidelines through a logical process that included reviewing 27 textbooks on educational testing and the results of 27 studies and reviews published since 1990. Presents the taxonomy, which is intended for classroom assessment. (SLD)
Descriptors: Classification, Literature Reviews, Multiple Choice Tests, Student Evaluation
Peer reviewed
Crehan, Kevin; Haladyna, Thomas M. – Journal of Experimental Education, 1991
Two item-writing rules were tested: phrasing stems as questions versus partial sentences; and using the "none-of-the-above" option instead of a specific content option. Results with 228 college students do not support a preference for either stem type, and they provide limited evidence to caution against the "none-of-the-above" option.…
Descriptors: College Students, Higher Education, Multiple Choice Tests, Test Construction
Haladyna, Thomas M.; Roid, Gale H. – Educational Technology, 1983
Summarizes item review in the development of criterion-referenced tests, including logical item review, which examines the match between instructional intent and the items; empirical item review, which examines response patterns; traditional item review; and instructional sensitivity of test items. Twenty-eight references are listed. (MBR)
Descriptors: Criterion Referenced Tests, Educational Research, Literature Reviews, Teaching Methods
Peer reviewed
Haladyna, Thomas M. – Educational Technology, Research and Development, 1991
Discusses the link between testing and the content of instruction and proposes a method for generating large numbers of objective-relevant test items for instructional programs in high schools, higher education, and training situations. Higher level thinking outcomes are discussed, item sets are described, and generic scenarios and questioning are…
Descriptors: Higher Education, Instructional Design, Questioning Techniques, Secondary Education
Peer reviewed
Roid, G. H.; Haladyna, Thomas M. – Educational and Psychological Measurement, 1978
Two techniques for writing achievement test items to accompany instructional materials are contrasted: writing items from statements of instructional objectives, and writing items from semi-automated rules for transforming instructional statements. Both systems resulted in about the same number of faulty items. (Author/JKS)
Descriptors: Achievement Tests, Comparative Analysis, Criterion Referenced Tests, Difficulty Level
Peer reviewed
Haladyna, Thomas M.; Downing, Steven M. – Applied Measurement in Education, 1989
A taxonomy of 43 rules for writing multiple-choice test items is presented, based on a consensus of 46 textbooks. These guidelines are presented as complete and authoritative, with solid consensus apparent for 33 of the rules. Four rules lack consensus, and 5 rules were cited fewer than 10 times. (SLD)
Descriptors: Classification, Interrater Reliability, Multiple Choice Tests, Objective Tests
Haladyna, Thomas M.; And Others – 1987
This paper discusses the development and use of "item shells" in constructing multiple-choice tests. An item shell is a "hollow" item that contains the syntactic structure and context of an item without specific content. Item shells are empirically developed from successfully used items selected from an existing item pool. Use…
Descriptors: Difficulty Level, Health Personnel, Item Banks, Multiple Choice Tests
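The item-shell idea described above — a syntactic frame with the specific content removed — can be illustrated with a simple template. The shell wording and fill-ins below are my own invention, not examples from the paper:

```python
# Hypothetical item shell: the stem's syntactic structure is fixed, and
# content slots are filled later to generate families of parallel items.
shell = "Which of the following best explains {finding} in {context}?"

# Filling the slots yields parallel item stems (fill-ins are invented).
items = [
    shell.format(finding="a negative point-biserial", context="an item analysis"),
    shell.format(finding="low reliability", context="a short classroom test"),
]
for stem in items:
    print(stem)
```

Each filled shell would still need distractors and review, but the shared frame is what lets item writers produce structurally comparable items quickly.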
Sympson, J. Bradford; Haladyna, Thomas M. – 1988
A new approach to polychotomous scoring of test items, similar to "max-alpha" scaling (MAS) and known as polyweighting, has been developed. Unlike MAS, this new method of polychotomous scoring provides scoring weights for a given item that are independent of the difficulty of other items in the analysis. Moreover, the scoring weights are…
Descriptors: Computer Software, Difficulty Level, Item Analysis, Latent Trait Theory
Haladyna, Thomas M.; Downing, Steven M. – 1988
The proposition that the optimal number of options in a multiple choice test item is three was examined. The concept of functional distractor, a plausible wrong answer that is negatively discriminating when total test performance is the criterion, is discussed. Three distinct groups of achievers (high, middle, and low) on a national standardized…
Descriptors: Achievement Tests, Item Analysis, Multiple Choice Tests, Physicians
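The abstract's notion of a functional distractor — one chosen more often by low achievers than by high achievers — can be sketched by splitting examinees into the three achievement groups it mentions and comparing option choice rates. The data and the exact cut into thirds are my own simplification for illustration:

```python
# Toy check for functional distractors (data invented): split examinees
# into high / middle / low thirds by total score and compare each
# option's selection rate across groups.
responses = [
    ("A", 19), ("A", 18), ("A", 17), ("A", 16),
    ("B", 13), ("A", 12), ("C", 11), ("B", 10),
    ("C", 9), ("B", 8), ("C", 7), ("D", 6),
]
key = "A"  # keyed (correct) option

ranked = sorted(responses, key=lambda r: r[1], reverse=True)
n = len(ranked) // 3
groups = {"high": ranked[:n], "middle": ranked[n:2 * n], "low": ranked[2 * n:]}

def choice_rate(option, group):
    """Proportion of examinees in `group` who selected `option`."""
    members = groups[group]
    return sum(1 for opt, _ in members if opt == option) / len(members)

for opt in "ABCD":
    rates = {g: round(choice_rate(opt, g), 2) for g in groups}
    # One simple operationalization: a distractor "functions" if low
    # achievers choose it more often than high achievers.
    flag = "functional distractor" if opt != key and rates["low"] > rates["high"] else ""
    print(opt, rates, flag)
```

In this invented data the key is chosen almost exclusively by the high group, while the distractors draw the middle and low groups, which is the discriminating pattern the paper examines.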
Crehan, Kevin D.; Haladyna, Thomas M. – 1994
More attention is currently being paid to the distractors of a multiple-choice test item (Thissen, Steinberg, and Fitzpatrick, 1989). A systematic relationship exists between the keyed response and distractors in multiple-choice items (Levine and Drasgow, 1983). New scoring methods have been introduced, computer programs developed, and research…
Descriptors: Comparative Analysis, Computer Assisted Testing, Distractors (Tests), Models