Showing 1 to 15 of 32 results
Paneerselvam, Bavani – ProQuest LLC, 2017
Multiple-choice retrieval practice with additional lures reduces retention on a later test (Roediger & Marsh, 2005). However, the mechanism underlying the negative outcomes with additional lures is poorly understood. Given that the positive outcomes of retrieval practice are associated with enhanced relational and item-specific processing…
Descriptors: Multiple Choice Tests, Test Items, Item Analysis, Recall (Psychology)
Vacc, Nicholas A.; Loesch, Larry C.; Lubik, Ruth E. – 2001
Multiple-choice tests are widely viewed as among the most effective and objective means of assessment. Item development is the central component of creating an effective test, but test developers often lack a background in item writing. This document describes the three cognitive levels of test items: recall, application, and analysis. It…
Descriptors: Educational Assessment, Evaluation, Item Analysis, Measures (Individuals)
Miller, Harry G.; Williams, Reed G. – Educational Technology, 1973
Descriptors: Content Analysis, Item Analysis, Measurement Techniques, Multiple Choice Tests
Peer reviewed
Veale, James R.; Foreman, Dale I. – Journal of Educational Measurement, 1983
Statistical procedures for measuring heterogeneity of test item distractor distributions, or cultural variation, are presented. These procedures are based on the notion that examinees' responses to the incorrect options of a multiple-choice test provide more information concerning cultural bias than their correct responses. (Author/PN)
Descriptors: Ethnic Bias, Item Analysis, Mathematical Models, Multiple Choice Tests
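The heterogeneity statistic Veale and Foreman describe compares how different examinee groups distribute their responses across a multiple-choice item's incorrect options. A minimal sketch of that idea, assuming a standard Pearson chi-square computed over a two-group distractor contingency table (the function name and example counts are illustrative, not from the paper):

```python
def distractor_chi_square(group1, group2):
    """Pearson chi-square statistic for heterogeneity of distractor
    choices between two examinee groups. Each list holds the number
    of examinees in that group who chose each incorrect option."""
    grand = sum(group1) + sum(group2)
    row_totals = (sum(group1), sum(group2))
    stat = 0.0
    for col in zip(group1, group2):  # one column per distractor
        col_total = sum(col)
        for row_total, observed in zip(row_totals, col):
            expected = row_total * col_total / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Identical distractor patterns yield a statistic of zero;
# divergent patterns yield a large value.
print(distractor_chi_square([10, 10], [10, 10]))  # 0.0
print(distractor_chi_square([20, 0], [0, 20]))    # 40.0
```

A large statistic flags an item whose wrong answers attract groups differently, which is the signal of potential cultural bias the authors argue is missed by looking at correct responses alone.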
Siskind, Theresa G.; Anderson, Lorin W. – 1982
The study was designed to examine the similarity of response options generated by different item writers using a systematic approach to item writing. The similarity of response options to student responses for the same item stems presented in an open-ended format was also examined. A non-systematic (subject matter expertise) approach and a…
Descriptors: Algorithms, Item Analysis, Multiple Choice Tests, Quality Control
Klein, Stephen P.; Bolus, Roger – 1983
A solution to reduce the likelihood of one examinee copying another's answers on large scale tests that require all examinees to answer the same set of questions is to use multiple test forms that differ in terms of item ordering. This study was conducted to determine whether varying the sequence in which blocks of items were presented to…
Descriptors: Adults, Cheating, Cost Effectiveness, Item Analysis
Lutkus, Anthony D.; Laskaris, George – 1981
Analyses of student responses to Introductory Psychology test questions were discussed. The publisher supplied a 2,000-item test bank on computer tape, and instructors selected questions for 15-item tests. The test questions were labeled by the publisher as factual or conceptual. The semester course used a mastery learning format in which…
Descriptors: Difficulty Level, Higher Education, Item Analysis, Item Banks
Choppin, Bruce – 1982
On well-constructed multiple-choice tests, the most serious threat to measurement is not variation in item discrimination, but the guessing behavior that may be adopted by some students. Ways of ameliorating the effects of guessing are discussed, especially for problems in latent trait models. A new item response model, including an item parameter…
Descriptors: Ability, Algorithms, Guessing (Tests), Item Analysis
Myers, Charles T. – 1978
The viewpoint is expressed that adding to test reliability by either selecting a more homogeneous set of items, restricting the range of item difficulty as closely as possible to the most efficient level, or increasing the number of items will not add to test validity and that there is considerable danger that efforts to increase reliability may…
Descriptors: Achievement Tests, Item Analysis, Multiple Choice Tests, Test Construction
Peer reviewed
Ebel, Robert L. – Educational and Psychological Measurement, 1978
A multiple true-false item requires the testee to identify each statement in a cluster (of two or more statements) as true or false; the cluster is then scored as a single item. This study showed the procedure to yield less reliable results than traditional true-false items. (JKS)
Descriptors: Guessing (Tests), Higher Education, Item Analysis, Multiple Choice Tests
Green, Kathy E. – 1983
The purpose of this study was to determine whether item difficulty is significantly affected by language difficulty and response set convergence. Language difficulty was varied by increasing sentence (stem) length, increasing syntactic complexity, and substituting uncommon words for more familiar terms in the item stem. Item wording ranged from…
Descriptors: Difficulty Level, Foreign Countries, Higher Education, Item Analysis
Case, Susan M. – 1988
This study was designed to gather data on the meaning of imprecise terms from items written by physicians for their students and by test committees for national licensure and certification examinations. A total of 32 members of test committees who write examination items for various medical specialty examinations participated in the study. Each…
Descriptors: Definitions, Higher Education, Item Analysis, Licensing Examinations (Professions)
Kuntz, Patricia – 1982
The quality of mathematics multiple choice items and their susceptibility to test wiseness were examined. Test wiseness was defined as "a subject's capacity to utilize the characteristics and formats of the test and/or test taking situation to receive a high score." The study used results of the Graduate Record Examinations Aptitude Test (GRE) and…
Descriptors: Cues, Item Analysis, Multiple Choice Tests, Psychometrics
Waller, Michael I. – 1974
In latent trait models the standard procedure for handling the problem caused by guessing on multiple choice tests is to estimate a parameter which is intended to measure the "guessingness" inherent in an item. Birnbaum's three parameter model, which handles guessing in this manner, ignores individual differences in guessing tendency. This paper…
Descriptors: Goodness of Fit, Guessing (Tests), Individual Differences, Item Analysis
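The Birnbaum three-parameter model Waller discusses handles guessing with an item-level lower asymptote. A minimal sketch of that item characteristic curve, using the standard 3PL parameterization (discrimination a, difficulty b, pseudo-guessing c); the function name and example values are illustrative:

```python
import math

def p_correct(theta, a, b, c):
    """Birnbaum three-parameter logistic (3PL) model: probability that
    an examinee of ability theta answers the item correctly.
    a: discrimination, b: difficulty, c: pseudo-guessing asymptote."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the curve sits halfway between c and 1.
print(round(p_correct(0.0, 1.0, 0.0, 0.25), 3))  # 0.625
```

Because c is attached to the item rather than the person, the model treats all examinees as equally likely to guess on a given item; Waller's point is precisely that this ignores individual differences in guessing tendency.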
Peer reviewed
Kolstad, Rosemarie K.; And Others – Journal of Research and Development in Education, 1983
A study compared college students' performance on complex multiple-choice tests with scores on multiple true-false clusters. Researchers concluded that the multiple-choice tests did not accurately measure students' knowledge and that cueing and guessing led to grade inflation. (PP)
Descriptors: Achievement Tests, Difficulty Level, Guessing (Tests), Higher Education