Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 1
Since 2016 (last 10 years) | 2
Since 2006 (last 20 years) | 2
Descriptor
Multiple Choice Tests | 22
Test Validity | 22
Test Wiseness | 22
Test Construction | 12
Guessing (Tests) | 10
Test Reliability | 10
Test Items | 9
Higher Education | 8
Response Style (Tests) | 7
Item Analysis | 5
Reading Comprehension | 4
Source
Journal of Educational… | 4
Journal of Experimental… | 2
Educational and Psychological… | 1
Journal of Educational… | 1
Journal of Optometric… | 1
Language Testing | 1
Nursing Outlook | 1
TESL-EJ | 1
Author
Allan, Alistair | 1
Brown, Thomas A. | 1
Cross, Lawrence | 1
Diedenhofen, Birk | 1
Fagley, N. S. | 1
Farr, Roger | 1
Frary, Robert | 1
Gross, Leon J. | 1
Hanna, Gerald | 1
Hanna, Gerald S. | 1
Harvill, Leo M. | 1
Publication Type
Reports - Research | 14
Journal Articles | 10
Speeches/Meeting Papers | 4
Guides - Classroom - Teacher | 1
Guides - General | 1
Opinion Papers | 1
Reports - Evaluative | 1
Tests/Questionnaires | 1
Education Level
Higher Education | 1
Postsecondary Education | 1
Audience
Researchers | 2
Practitioners | 1
Location
Germany | 1
Assessments and Surveys
ACT Assessment | 1
Comprehensive Tests of Basic… | 1
Stanford Achievement Tests | 1
Test of English as a Foreign… | 1
Papenberg, Martin; Diedenhofen, Birk; Musch, Jochen – Journal of Experimental Education, 2021
Testwiseness may introduce construct-irrelevant variance to multiple-choice test scores. Presenting response options sequentially has been proposed as a potential solution to this problem. In an experimental validation, we determined the psychometric properties of a test based on the sequential presentation of response options. We created a strong…
Descriptors: Test Wiseness, Test Validity, Test Reliability, Multiple Choice Tests
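
As a rough illustration of the sequential-presentation idea summarized in the abstract above (not necessarily the authors' exact procedure), the sketch below offers the response options one at a time with a forced accept/reject decision on each, so later options cannot be compared against earlier ones. The item content, decision callback, and scoring are hypothetical.

# Hypothetical sketch of sequentially presented response options: the examinee
# must accept or reject each option as it appears and cannot look ahead, which
# removes the between-option comparison cues that testwise examinees exploit.
def administer_sequential(item, decide):
    """Present options one at a time; return the index of the first accepted option, or None."""
    for idx, option in enumerate(item["options"]):
        if decide(item["stem"], option):      # True means the examinee accepts this option
            return idx
    return None

def score(item, chosen_idx):
    return int(chosen_idx == item["key"])

# Example with a hypothetical item and a simulated examinee decision rule:
item = {
    "stem": "Test reliability refers to...",
    "options": ["consistency of scores across repeated measurement",
                "agreement with an external criterion",
                "coverage of the intended content domain"],
    "key": 0,
}
choice = administer_sequential(item, lambda stem, opt: "consistency" in opt)
print(score(item, choice))    # 1
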
Toker, Deniz – TESL-EJ, 2019
The central purpose of this paper is to examine validity problems arising from the multiple-choice items and technical passages in the Test of English as a Foreign Language Internet-based Test (TOEFL iBT) reading section, primarily concentrating on construct-irrelevant variance (Messick, 1989). My personal TOEFL iBT experience, along with my…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
Scherich, Henry; Hanna, Gerald – 1976
The reading comprehension items for the Nelson Reading Skills Test, a revision of a widely used standardized reading test, were administered to several hundred fourth- and sixth-grade students in order to determine whether the student's ability to answer correctly actually depended on his comprehension of the accompanying passage. All the…
Descriptors: Elementary Education, Multiple Choice Tests, Reading Comprehension, Reading Tests
Pyrczak, Fred – 1973
The general purpose of this study was to determine the effects of similarities between stems and keyed choices on test difficulty. Unlike previous investigations of this undesirable characteristic of some multiple-choice items, the present study employed items that were unintentionally faulty and samples of examinees who were highly experienced…
Descriptors: Item Analysis, Multiple Choice Tests, Research Reports, Test Construction

Fagley, N. S. – Journal of Educational Psychology, 1987
This article investigates positional response bias, testwiseness, and guessing strategy as components of variance in test responses on multiple-choice tests. University students responded to two content exams, a testwiseness measure, and a guessing strategy measure. The proportion of variance in test scores accounted for by positional response…
Descriptors: Achievement Tests, Guessing (Tests), Higher Education, Multiple Choice Tests
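
The variance-partitioning question in the abstract above can be illustrated, under the assumption of a regression-style analysis (not necessarily the one Fagley used), by the incremental R-squared of nested linear models; the data below are simulated for demonstration only.

# Hedged sketch: estimating the share of score variance associated with
# testwiseness and guessing-strategy measures via incremental R^2.
import numpy as np

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])        # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Hypothetical data: exam scores plus testwiseness and guessing-strategy measures
rng = np.random.default_rng(0)
testwiseness = rng.normal(size=200)
guessing = rng.normal(size=200)
scores = 0.5 * testwiseness + 0.2 * guessing + rng.normal(size=200)

r2_tw = r_squared(testwiseness.reshape(-1, 1), scores)
r2_full = r_squared(np.column_stack([testwiseness, guessing]), scores)
print(f"variance from testwiseness: {r2_tw:.2f}; added by guessing strategy: {r2_full - r2_tw:.2f}")
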

Plake, Barbara S.; Huntley, Renee M. – Educational and Psychological Measurement, 1984
Two studies examined the effect of making the correct answer of a multiple choice test item grammatically consistent with the item. American College Testing Assessment experimental items were constructed to investigate grammatical compliance for plural-singular and vowel-consonant agreement. Results suggest…
Descriptors: Grammar, Higher Education, Item Analysis, Multiple Choice Tests

Gross, Leon J. – Journal of Optometric Education, 1982
A critique of a variety of formats used in combined-response test items (those in which the respondent must choose the correct combination of options: a and b, all of the above, etc.) illustrates why this kind of testing is inherently flawed and should not be used in optometry examinations. (MSE)
Descriptors: Higher Education, Multiple Choice Tests, Optometry, Standardized Tests

Allan, Alistair – Language Testing, 1992
The design of a valid and reliable test of test-wiseness is reported: a 33-item multiple-choice instrument with 4 subscales trialed with several groups of English-as-a-Second-Language students. Findings indicate differential skills in test-taking; some learner scores are influenced by skills that are not the focus of the test. (13 references)…
Descriptors: English (Second Language), Language Research, Language Tests, Multiple Choice Tests

Weiten, Wayne – Journal of Experimental Education, 1984
The effects of violating four item construction principles were examined to assess the validity of the principles and the importance of students' test wiseness. While flawed items were significantly less difficult than sound items, differences in item discrimination, test reliability, and concurrent validity were not observed. (Author/BW)
Descriptors: Difficulty Level, Higher Education, Item Analysis, Multiple Choice Tests
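
The item and test statistics named in this abstract (difficulty, discrimination, internal-consistency reliability) have standard classical-test-theory definitions; the generic sketch below computes them for a 0/1 item-response matrix and is not a reconstruction of Weiten's analysis.

# Classical item statistics for a persons x items matrix of 0/1 scores.
import numpy as np

def item_statistics(responses):
    responses = np.asarray(responses, dtype=float)
    n_persons, n_items = responses.shape
    total = responses.sum(axis=1)
    difficulty = responses.mean(axis=0)                      # proportion correct (p-value)
    # item discrimination: correlation of each item with the rest-of-test score
    discrimination = np.array([
        np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
        for j in range(n_items)
    ])
    # KR-20 internal-consistency reliability for dichotomous items
    p, q = difficulty, 1 - difficulty
    kr20 = (n_items / (n_items - 1)) * (1 - (p * q).sum() / total.var(ddof=1))
    return difficulty, discrimination, kr20

# Tiny hypothetical data set (4 persons x 3 items):
diff, disc, rel = item_statistics([[1, 1, 0],
                                   [1, 0, 0],
                                   [1, 1, 1],
                                   [0, 0, 0]])
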

Smith, Jeffrey K. – Journal of Educational Measurement, 1982
Two studies examined the extent to which test takers use plausibility as a method for locating correct responses when guessing and the extent to which scores can be improved by teaching test takers this approach. Results confirm that this aspect of multiple choice items merits further consideration by test constructors. (Author/BW)
Descriptors: Guessing (Tests), Higher Education, Multiple Choice Tests, Scores
Harvill, Leo M. – 1984
The objectives for this study were to: (1) develop a valid, reliable measure of test-wiseness with equivalent forms for use with students in the health sciences; and (2) determine the level of test-wiseness of entering medical students. The test-wiseness areas included in this study were: similar options, umbrella term, item give-away, convergence…
Descriptors: Higher Education, Measurement Techniques, Medical Students, Multiple Choice Tests

Jacobs, Stanley S. – Journal of Educational Measurement, 1975
Descriptors: Criterion Referenced Tests, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
Landa, Suzanne – 1976
Instructions are provided on how to create and take computer aided admissible probability measurement (CAAPM) tests using programs available on PLATO IV. Admissible probability measurement is a testing procedure that permits a user to express a degree of uncertainty as to the correctness of alternative answers. Section II describes PLATO IV, an…
Descriptors: Computer Oriented Programs, Confidence Testing, Multiple Choice Tests, Online Systems
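
Admissible probability measurement relies on a proper scoring rule, under which an examinee's expected score is maximized by reporting honest subjective probabilities across the options. The sketch below uses the logarithmic rule as one classic example of such a rule; it illustrates the general idea only and is not the specific scoring implemented in the PLATO IV programs.

# Illustration of an admissible (proper) scoring rule for probability responses:
# the examinee reports a probability for each option, and the score is the log
# of the probability assigned to the keyed option.
import math

def log_score(reported_probs, key, floor=0.01):
    """reported_probs: probabilities over the options (summing to 1); key: index of the correct option."""
    p = max(reported_probs[key], floor)     # floor avoids an unbounded penalty at p = 0
    return math.log(p)

# An examinee unsure between the first two options might report [0.6, 0.3, 0.1, 0.0]:
print(log_score([0.6, 0.3, 0.1, 0.0], key=0))   # about -0.51
print(log_score([1.0, 0.0, 0.0, 0.0], key=1))   # heavy penalty for an overconfident wrong answer
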
Hanna, Gerald S. – 1974
Although the "Don't Know" (DK) option has received telling criticism in maximum performance summative tests, its potential use in formative evaluation was considered and judged to be more promising. The pretest of an instructional module was administered with DK options. Examinees were then required to answer each question to which they had…
Descriptors: Formative Evaluation, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)

Cross, Lawrence; Frary, Robert – Journal of Educational Measurement, 1977
Corrected-for-guessing scores on multiple-choice tests depend upon the ability and willingness of examinees to guess when they have some basis for answering, and to avoid guessing when they have no basis. The present study determined the extent to which college students were able and willing to comply with formula-scoring directions. (Author/CTM)
Descriptors: Guessing (Tests), Higher Education, Individual Characteristics, Multiple Choice Tests
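
Formula scoring of the kind studied here is conventionally computed as rights minus a fraction of wrongs, so that blind guessing among k options has an expected contribution of zero and omitted items are not penalized. A minimal sketch:

# Conventional correction-for-guessing (formula) score: rights minus wrongs
# divided by (options per item - 1), so random guessing has expected value zero.
def formula_score(rights, wrongs, options_per_item):
    return rights - wrongs / (options_per_item - 1)

# Example: 40 right, 12 wrong, 8 omitted on a 4-option test
print(formula_score(40, 12, 4))   # 36.0
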
Pages: 1 | 2