Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 89 |
| Since 2022 (last 5 years) | 457 |
| Since 2017 (last 10 years) | 1245 |
| Since 2007 (last 20 years) | 2519 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 122 |
| Teachers | 105 |
| Researchers | 64 |
| Students | 46 |
| Administrators | 14 |
| Policymakers | 7 |
| Counselors | 3 |
| Parents | 3 |
Location
| Location | Records |
| --- | --- |
| Canada | 134 |
| Turkey | 131 |
| Australia | 123 |
| Iran | 66 |
| Indonesia | 61 |
| Germany | 51 |
| United Kingdom | 51 |
| Taiwan | 46 |
| United States | 43 |
| China | 39 |
| California | 35 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 3 |
| Meets WWC Standards with or without Reservations | 5 |
| Does not meet standards | 6 |
Kline, Keith – Sierra Club Bulletin, 1979
Presents a 23-question, multiple-choice test on energy. Answers are provided. The test is designed for the general public.
Descriptors: Ecology, Energy, Environmental Education, Evaluation
Peer reviewed: Linn, Robert L. – Educational and Psychological Measurement, 1976
Testing procedures in which examinees assign probabilities of correctness to all multiple-choice alternatives are examined. Two basic assumptions underlying these procedures are reviewed. Empirical examinee response data are examined, and it is suggested that these assumptions should not be taken lightly in empirical studies of personal probability…
Descriptors: Confidence Testing, Guessing (Tests), Measurement Techniques, Multiple Choice Tests
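The Linn (1976) entry above concerns formats in which examinees report a probability of correctness for every alternative rather than picking one. The article does not prescribe a particular scoring rule, so the sketch below is only an illustrative Python example of two proper scoring rules often paired with such responses; the function names and the probability floor are assumptions.

```python
import math

def log_score(probs, correct_index, floor=0.01):
    """Logarithmic (proper) score for one item: the log of the probability
    the examinee assigned to the keyed alternative.  The floor avoids an
    unbounded penalty when a reported probability is 0."""
    return math.log(max(probs[correct_index], floor))

def brier_score(probs, correct_index):
    """Quadratic (Brier-type) score: 1 minus the squared distance between the
    reported probability vector and the indicator vector of the keyed answer."""
    return 1.0 - sum((p - (1.0 if i == correct_index else 0.0)) ** 2
                     for i, p in enumerate(probs))

# One 4-alternative item; the examinee reports 0.6/0.2/0.1/0.1 and alternative 0 is keyed.
reported = [0.6, 0.2, 0.1, 0.1]
print(round(log_score(reported, 0), 2))    # -0.51
print(round(brier_score(reported, 0), 2))  # 0.78
```

Because both rules are proper, an examinee maximizes expected score by reporting honest probabilities, which is the property such procedures rely on.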
Peer reviewed: Wisner, Joel D.; Wisner, Robert J. – Business Education Forum, 1997
Undergraduate business students completed two multiple-choice tests: one in which they indicated their answer and one of three levels of confidence, and one in which they circled the item only when they possessed high confidence in the answer. The three-level test took longer to take and grade; students preferred the second format. Both types…
Descriptors: Business Education, Confidence Testing, Higher Education, Multiple Choice Tests
Peer reviewed: Frary, Robert B. – Applied Measurement in Education, 1989
Multiple-choice response and scoring methods that attempt to determine an examinee's degree of knowledge about each item in order to produce a total test score are reviewed. There is apparently little advantage to such schemes; however, they may have secondary benefits such as providing feedback to enhance learning. (SLD)
Descriptors: Knowledge Level, Multiple Choice Tests, Scoring, Scoring Formulas
Peer reviewed: Hoepfl, Marie C. – Technology Teacher, 1994
Provides guidelines for writing multiple-choice tests and ways to evaluate the quality of test items. (SK)
Descriptors: Item Analysis, Multiple Choice Tests, Teacher Made Tests, Test Construction
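Hoepfl's guidelines concern judging the quality of multiple-choice items. As a hedged illustration (not taken from the article), the sketch below computes two classical item-analysis statistics routinely used for that purpose: the difficulty index (proportion correct) and the point-biserial discrimination index.

```python
import statistics

def item_difficulty(item_scores):
    """Classical difficulty index: proportion of examinees answering the item
    correctly; item_scores is a list of 0/1 values for one item."""
    return sum(item_scores) / len(item_scores)

def point_biserial(item_scores, total_scores):
    """Point-biserial correlation between one dichotomous item and the total
    test score, a standard discrimination index."""
    mean_total = statistics.fmean(total_scores)
    sd_total = statistics.pstdev(total_scores)
    p = sum(item_scores) / len(item_scores)
    mean_correct = statistics.fmean(
        t for s, t in zip(item_scores, total_scores) if s == 1)
    return (mean_correct - mean_total) / sd_total * (p / (1 - p)) ** 0.5

# Five examinees: their 0/1 scores on one item and their total test scores.
item  = [1, 1, 0, 1, 0]
total = [18, 16, 9, 14, 11]
print(item_difficulty(item))                  # 0.6
print(round(point_biserial(item, total), 2))  # higher values flag a more discriminating item
```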
Peer reviewed: Nishisato, Shizuhiko – Psychometrika, 1993
In Guttman-type quantification of contingency tables and multiple-choice data (incidence data), the trivial solution arising from the marginal constraints is typically removed before quantification. Relevant formulas are presented for cases affected by the trivial solution and those that are not. (SLD)
Descriptors: Classification, Equations (Mathematics), Incidence, Mathematical Models
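The Nishisato note deals with the trivial solution that marginal constraints introduce into Guttman-type quantification (dual scaling). Its formulas are not reproduced here; the sketch below only illustrates the standard device of subtracting the rank-one margin term before the singular value decomposition, using a made-up contingency table.

```python
import numpy as np

# Toy 3x4 contingency table (rows: groups, columns: chosen options).
N = np.array([[20.0,  5.0, 10.0,  5.0],
              [ 8.0, 15.0,  7.0, 10.0],
              [ 5.0, 10.0, 20.0, 15.0]])

P = N / N.sum()        # correspondence matrix
r = P.sum(axis=1)      # row masses
c = P.sum(axis=0)      # column masses

# Subtracting the outer product of the margins removes the trivial solution
# (a constant vector with singular value 1) that the marginal constraints
# would otherwise produce; the SVD of the standardized residuals then gives
# only the non-trivial quantification dimensions.
S = np.diag(r ** -0.5) @ (P - np.outer(r, c)) @ np.diag(c ** -0.5)
U, sing, Vt = np.linalg.svd(S, full_matrices=False)

print(sing)                          # singular values, trivial solution excluded
print(np.diag(r ** -0.5) @ U[:, 0])  # first-dimension row quantification
```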
Peer reviewed: Tate, Richard L. – Journal of Educational Measurement, 1999
Suggests that a modification of traditional linking is necessary when tests consist of constructed response items judged by raters and a possibility of year-to-year variation in rating discrimination and severity exists. Illustrates this situation with an artificial example. (SLD)
Descriptors: Equated Scores, Interrater Reliability, Item Response Theory, Multiple Choice Tests
Peer reviewed: Baranchik, Alvin; Cherkas, Barry – International Journal of Mathematical Education in Science and Technology, 2000
Presents a study involving three sections of pre-calculus (n=181) at a four-year college in which partial credit scoring on multiple-choice questions was examined over an entire semester. Indicates that grades determined by partial credit scoring seemed more reflective of both the quantity and quality of student knowledge than grades determined by…
Descriptors: Evaluation, Higher Education, Mathematics Education, Multiple Choice Tests
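The Baranchik and Cherkas abstract refers to partial credit scoring of multiple-choice questions but does not spell out the weighting scheme, so the option weights below are purely hypothetical. The sketch only shows the general mechanism: each selectable option carries a credit weight, and the test score is the sum of the weights of the options chosen.

```python
# Hypothetical option weights for one multiple-choice item: the keyed answer
# earns full credit and distractors earn whatever partial credit the instructor
# judges they deserve.  These weights are illustrative, not the scheme used in
# the study above.
OPTION_WEIGHTS = {
    "A": 1.0,   # fully correct
    "B": 0.5,   # right method, arithmetic slip
    "C": 0.25,  # partially relevant idea
    "D": 0.0,   # unrelated error
}

def partial_credit_score(choices, weights_per_item):
    """Sum the weight of the chosen option on every item."""
    return sum(weights[choice] for choice, weights in zip(choices, weights_per_item))

weights_per_item = [OPTION_WEIGHTS] * 3          # three items sharing the same weights
print(partial_credit_score(["A", "B", "D"], weights_per_item))  # 1.5 out of a possible 3.0
```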
Peer reviewed: Wood, William C. – Journal of Education for Business, 1998
Describes the technique of linked multiple choice, a hybrid of open-ended and multiple-choice formats. Explains how it combines the testing power of free-response questions with the efficient grading of multiple choice. (SK)
Descriptors: Grading, Multiple Choice Tests, Student Evaluation, Test Items
Peer reviewed: Walker, Douglas M.; Thompson, John S. – Assessment & Evaluation in Higher Education, 2001
Compared a standard multiple choice exam format with two modified formats which provide instructors with information on students' risk preferences (students answer questions twice) and confidence in their answers (students assign a point value to questions). Found that while the alternatives offer increased choice to students and low-cost…
Descriptors: Comparative Analysis, Confidence Testing, Evaluation Research, Multiple Choice Tests
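One of the modified formats in the Walker and Thompson abstract has students assign point values to questions. The abstract gives no scoring details, so the sketch below is just one plausible reading: each student spreads a fixed point budget over the exam and keeps the points placed on questions answered correctly, so larger wagers signal greater confidence.

```python
def point_allocation_score(points, correct, budget=100):
    """points maps question id -> points wagered; correct is the set of question
    ids answered correctly.  The budget of 100 is an assumption for illustration."""
    assert sum(points.values()) <= budget, "cannot wager more than the budget"
    return sum(p for q, p in points.items() if q in correct)

wagers  = {"q1": 50, "q2": 30, "q3": 20}        # more points signal more confidence
correct = {"q1", "q3"}
print(point_allocation_score(wagers, correct))  # 70
```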
Peer reviewed: Tate, Richard – Journal of Educational Measurement, 2000
Studied the error associated with a proposed linking method for tests consisting of both constructed response and multiple choice items through a simulation study varying several factors. Results support the use of the proposed linking method. Also illustrated possible linking bias resulting from use of the traditional linking method and the use…
Descriptors: Constructed Response, Equated Scores, Multiple Choice Tests, Simulation
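Both Tate entries concern linking mixed-format tests across years. Tate's modified procedure is not reproduced here; as background only, the sketch below shows the textbook mean/sigma linear linking that the abstracts call traditional linking, computed from the difficulty estimates of items common to two calibrations.

```python
import statistics

def mean_sigma_link(b_new, b_old):
    """Mean/sigma linking: find A and B so that theta_old = A * theta_new + B,
    using common-item difficulty estimates from the two calibrations.  This is
    the traditional transformation, not Tate's modified procedure."""
    A = statistics.pstdev(b_old) / statistics.pstdev(b_new)
    B = statistics.fmean(b_old) - A * statistics.fmean(b_new)
    return A, B

# Hypothetical common-item difficulties from this year's and last year's calibrations.
b_new = [-1.2, -0.4, 0.1, 0.8, 1.5]
b_old = [-1.0, -0.2, 0.3, 1.0, 1.7]
A, B = mean_sigma_link(b_new, b_old)
print(round(A, 3), round(B, 3))   # roughly 1.0 and 0.2 for this toy shift
```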
Rodriguez, Michael C. – Educational Measurement: Issues and Practice, 2005
Multiple-choice items are a mainstay of achievement testing. The need to adequately cover the content domain to certify achievement proficiency by producing meaningful, precise scores requires many high-quality items. More 3-option items can be administered than 4- or 5-option items per testing time while improving content coverage, without…
Descriptors: Psychometrics, Testing, Scores, Test Construction
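The Rodriguez argument rests on arithmetic: with fixed testing time, dropping a weak distractor lets more items fit in the same session, and test length feeds reliability through the Spearman-Brown prophecy formula. The timing figures and the starting reliability below are hypothetical, and the projection assumes item quality is unchanged when an option is removed.

```python
def spearman_brown(reliability, length_factor):
    """Projected reliability when test length is multiplied by length_factor."""
    k = length_factor
    return k * reliability / (1 + (k - 1) * reliability)

# Hypothetical timing: a 4-option item takes 75 s, a 3-option item 60 s,
# so a 60-minute session holds 48 four-option or 60 three-option items.
items_4opt = 60 * 60 // 75        # 48
items_3opt = 60 * 60 // 60        # 60
rho_4opt = 0.80                   # assumed reliability of the 48-item form

print(items_4opt, items_3opt)
print(round(spearman_brown(rho_4opt, items_3opt / items_4opt), 3))  # about 0.833
```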
Breese, Elisabeth L.; Hillis, Argye E. – Brain and Language, 2004
Auditory comprehension is commonly measured with multiple choice tasks. The sensitivity of these tasks in identifying deficits, however, is limited by credit given for correct guesses by forced choice. In this study, we compare performance on the multiple choice task to an alternative word/picture verification task, in 122 subjects with acute left…
Descriptors: Listening Comprehension, Multiple Choice Tests, Brain, Auditory Perception
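The Breese and Hillis abstract notes that forced-choice accuracy is inflated by credit for correct guesses. Their word/picture verification task is not modeled here; the sketch below only shows the standard chance-level and correction-for-guessing arithmetic that motivates the concern.

```python
def chance_level(n_items, n_choices):
    """Expected number correct from guessing alone on an n-choice forced-choice task."""
    return n_items / n_choices

def corrected_score(num_right, num_wrong, n_choices):
    """Classical formula-scoring correction R - W / (k - 1); an illustration of
    why raw forced-choice accuracy overstates comprehension, not the
    verification task the study proposes."""
    return num_right - num_wrong / (n_choices - 1)

print(chance_level(40, 4))          # 10 items expected correct by pure guessing
print(corrected_score(28, 12, 4))   # 24.0
```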
Brown, Alan S.; Brown, Christine M.; Mosbacher, Joy L.; Dryden, W. Erich – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2006
The negative effects of false information presented either prior to (proactive interference; PI) or following (retroactive interference; RI) true information were examined with word definitions (Experiment 1) and trivia facts (Experiment 2). Participants were explicitly aware of which information was true and false when shown, and true-false…
Descriptors: Multiple Choice Tests, Inhibition, Negative Reinforcement, Definitions
Hayati, A. Majid; Ghojogh, A. Nick – English Language Teaching, 2008
This study investigated the relative frequency of seven test-wiseness (TW) strategies among Iranian EFL students to identify possible relationships between test-wiseness, proficiency, and gender. To do so, 80 of the 138 participants, all undergraduate EFL students from Shahid Chamran University, were chosen and divided on the basis of their…
Descriptors: Test Wiseness, English (Second Language), Second Language Learning, Second Language Instruction
