Publication Date
| Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 89 |
| Since 2022 (last 5 years) | 457 |
| Since 2017 (last 10 years) | 1245 |
| Since 2007 (last 20 years) | 2519 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 122 |
| Teachers | 105 |
| Researchers | 64 |
| Students | 46 |
| Administrators | 14 |
| Policymakers | 7 |
| Counselors | 3 |
| Parents | 3 |
Location
| Location | Records |
| --- | --- |
| Canada | 134 |
| Turkey | 131 |
| Australia | 123 |
| Iran | 66 |
| Indonesia | 61 |
| Germany | 51 |
| United Kingdom | 51 |
| Taiwan | 46 |
| United States | 43 |
| China | 39 |
| California | 35 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 3 |
| Meets WWC Standards with or without Reservations | 5 |
| Does not meet standards | 6 |
Thomson, Peter – Australian Science Teachers Journal, 1970
Descriptors: Achievement Tests, Essay Tests, Evaluation, Multiple Choice Tests
Peer reviewed: Aiken, Lewis R. – Educational and Psychological Measurement, 1982
Five types of multiple-choice items that can be used to assess higher-order educational objectives are examined. The item types do not exhaust the possibilities, but they are standard forms found helpful in writing items to measure more than recognitive memory. (Author/CM)
Descriptors: Cognitive Measurement, Educational Objectives, Evaluation Methods, Multiple Choice Tests
Peer reviewed: Abu-Sayf, F. K. – Educational Review, 1979
This article discusses recent developments in the scoring of multiple-choice items from two angles: first, changes to the test instructions used with conventional scoring procedures, and second, new scoring methods and formulas. (Author)
Descriptors: Confidence Testing, Guessing (Tests), Measurement Objectives, Multiple Choice Tests
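Abu-Sayf's abstract names scoring formulas without reproducing them; for orientation, a minimal sketch of the classic correction-for-guessing rule that conventional formula scoring uses (the standard textbook formula, not necessarily one of the newer methods the article reviews):

```python
def formula_score(num_right: int, num_wrong: int, k: int) -> float:
    """Classic correction for guessing: S = R - W / (k - 1).

    Omitted items are neither rewarded nor penalized; under blind
    guessing on k-option items, the expected penalty on wrong answers
    cancels the expected gain from lucky guesses.
    """
    return num_right - num_wrong / (k - 1)

# Example: 60 right, 20 wrong, 20 omitted on a 100-item, 4-option test.
print(formula_score(60, 20, k=4))  # 53.33... versus a number-right score of 60
```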
Peer reviewed: McMeen, George R.; Thorman, Joseph H. – Clearing House, 1981
Citing the importance of immediate feedback on multiple-choice test items for students, the authors discuss four electronic or mechanical devices for rapid test scoring. (SJL)
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Feedback, Multiple Choice Tests
Peer reviewed: Gay, Lorraine R. – Journal of Educational Measurement, 1980
The influence of test format on the retention of research concepts and procedures, as measured on a final examination, was investigated. The formats studied were multiple choice and short answer. (Author/JKS)
Descriptors: Higher Education, Multiple Choice Tests, Retention (Psychology), Student Attitudes
Peer reviewed: Paterson, Ellen R. – RQ, 1979
A multiple-choice pretest, given to 113 biology and 58 health education college students, measured their knowledge (before library instruction) of specific information sources required to complete course assignments. The mixed results showed that prior instruction, whether a class or a library tour, did not guarantee higher mean scores. (JD)
Descriptors: Biology, College Students, Health Education, Library Instruction
Peer reviewed: Pfeffer, W. F.; And Others – American Mathematical Monthly, 1979
Describes an experiment on testing methods conducted in a mathematics course at the University of California, Davis. (PK)
Descriptors: College Mathematics, Educational Testing, Group Testing, Mathematics Education
Peer reviewed: Kim, Jee-Seon; Hanson, Bradley A. – Applied Psychological Measurement, 2002
Presents a characteristic curve procedure for comparing transformations of the item response theory ability scale assuming the multiple-choice model. Illustrates the use of the method with an example equating American College Testing mathematics tests. (SLD)
Descriptors: Ability, Equated Scores, Item Response Theory, Mathematics Tests
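The characteristic curve idea is to find the linear rescaling theta_Y = A * theta_X + B under which the two forms' test characteristic curves coincide. A minimal sketch in the Stocking-Lord style, using the simpler 2PL model rather than the multiple-choice model the paper assumes, with invented anchor-item parameters:

```python
import numpy as np
from scipy.optimize import minimize

def p_2pl(theta, a, b):
    """2PL item response function over a grid of abilities."""
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))

# Hypothetical 2PL estimates (a, b) for the same anchor items calibrated
# separately on Form X and Form Y (values invented for illustration).
a_x = np.array([1.2, 0.8, 1.5, 1.0]); b_x = np.array([-0.5, 0.3, 1.1, -1.2])
a_y = np.array([1.1, 0.9, 1.4, 1.1]); b_y = np.array([-0.2, 0.6, 1.4, -0.9])

theta = np.linspace(-4, 4, 81)  # quadrature points on the Form Y scale

def loss(params):
    A, B = params
    # Put Form X parameters on the Form Y scale: a -> a/A, b -> A*b + B,
    # then compare the two test characteristic curves pointwise.
    tcc_x = p_2pl(theta, a_x / A, A * b_x + B).sum(axis=1)
    tcc_y = p_2pl(theta, a_y, b_y).sum(axis=1)
    return np.sum((tcc_x - tcc_y) ** 2)

A, B = minimize(loss, x0=[1.0, 0.0]).x
print(f"theta_Y = {A:.3f} * theta_X + {B:.3f}")
```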
Peer reviewed: Wainer, Howard; Lukhele, Robert – Applied Measurement in Education, 1997
The screening for flaws that is routinely done for multiple-choice items is often not done for large items. Examines continuous item weighting as a way to manage the influence of differential item functioning (DIF). Data from the College Board Advanced Placement History Test are used to illustrate the method. (SLD)
Descriptors: Advanced Placement, College Entrance Examinations, History, Item Bias
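For context on the DIF screening the abstract mentions, here is a sketch of the standard Mantel-Haenszel DIF statistic for a multiple-choice item (a conventional screen, not the continuous item weighting the paper examines); all counts are invented:

```python
import numpy as np

def mantel_haenszel_alpha(right_ref, wrong_ref, right_foc, wrong_foc):
    """Mantel-Haenszel common odds ratio across matched score strata.

    Inputs are per-stratum counts of correct/incorrect responses for the
    reference and focal groups; alpha near 1 indicates no DIF.
    """
    A, B = np.asarray(right_ref, float), np.asarray(wrong_ref, float)
    C, D = np.asarray(right_foc, float), np.asarray(wrong_foc, float)
    N = A + B + C + D
    return (A * D / N).sum() / (B * C / N).sum()

# Invented counts in three total-score strata (low, mid, high).
alpha = mantel_haenszel_alpha([30, 60, 90], [70, 40, 10],
                              [20, 50, 85], [80, 50, 15])
print(f"MH odds ratio: {alpha:.2f}")  # ETS delta scale: -2.35 * ln(alpha)
```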
Peer reviewed: Bellezza, Francis S.; Bellezza, Suzanne F. – Teaching of Psychology, 1989
Describes a statistical procedure that detects cheating by comparing answers for pairs of students using those items on which both made errors. If the number of identical wrong answers is greater than the number expected by chance, cheating is likely. Suggests using procedure to discourage cheating. (KO)
Descriptors: Cheating, Classroom Techniques, Educational Research, Higher Education
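A rough sketch of the idea, under the simplifying assumption that two examinees pick the same wrong option on a k-choice item with probability 1/(k-1); the published procedure conditions on observed error-choice frequencies, so treat this uniform-null version as illustration only:

```python
from math import comb

def prob_matches_at_least(n_both_wrong: int, n_identical: int, k: int = 4) -> float:
    """P(>= n_identical identical wrong answers on the n_both_wrong items
    both examinees missed), under a uniform-guessing null in which two
    students independently pick the same wrong option with probability
    1/(k - 1). A very small probability flags possible copying.
    """
    p = 1.0 / (k - 1)
    return sum(
        comb(n_both_wrong, m) * p**m * (1 - p) ** (n_both_wrong - m)
        for m in range(n_identical, n_both_wrong + 1)
    )

# Two students missed 20 of the same items and gave the identical wrong
# answer on 15 of them: far more overlap than chance predicts.
print(f"{prob_matches_at_least(20, 15, k=4):.2e}")
```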
Peer reviewed: Harris, Deborah J.; Kolen, Michael J. – Educational and Psychological Measurement, 1990
An Angoff method and a frequency estimation equipercentile equating method were compared, using data from three forms of a 200-item multiple-choice certification test. Data requirements are fewer and computational requirements less burdensome for the former than for the latter method. However, results of the two methods are not interchangeable.…
Descriptors: Comparative Analysis, Computation, Equated Scores, Licensing Examinations (Professions)
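For contrast with the Angoff method, a minimal single-group sketch of the equipercentile idea: map a Form X score to the Form Y score holding the same percentile rank. The frequency estimation method compared in the study does this within a synthetic population under a common-item design; that bookkeeping is omitted here, and the score data are invented:

```python
import numpy as np

def equipercentile(scores_x: np.ndarray, scores_y: np.ndarray, x: float) -> float:
    """Map a Form X score to the Form Y score with the same percentile rank."""
    pr = np.mean(scores_x <= x)              # percentile rank of x on Form X
    return float(np.quantile(scores_y, pr))  # Form Y score at that rank

rng = np.random.default_rng(0)
# Invented score distributions: Form Y runs slightly harder than Form X.
form_x = rng.binomial(200, 0.70, size=5000)
form_y = rng.binomial(200, 0.65, size=5000)
print(equipercentile(form_x, form_y, 140))  # ~130: the "equivalent" Form Y score
```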
Peer reviewed: Wilhite, Stephen C. – Journal of Reading Behavior, 1988
Examines the effect of headings on memory as a function of readers' preexisting knowledge of passage topic. Employs a multiple-choice retention test. Finds that headings facilitate recognition memory by activating schemas and that such organizational effects of headings benefit main-idea information more than detail information. (RS)
Descriptors: Higher Education, Multiple Choice Tests, Prior Learning, Reading Research
Peer reviewed: Foos, Paul W.; Fisher, Ronald P. – Journal of Educational Psychology, 1988
A study involving 105 undergraduates assessed the value of testing as a means of increasing, rather than simply monitoring, learning. Results indicate that fill-in-the-blank items were more effective than multiple-choice items, and items requiring student inferences more effective than verbatim items, in furthering student learning. (TJH)
Descriptors: Advance Organizers, Higher Education, Multiple Choice Tests, Reading Tests
Peer reviewed: Haladyna, Thomas M.; Downing, Steven M. – Applied Measurement in Education, 1989
Results of 96 theoretical/empirical studies were reviewed to see if they support a taxonomy of 43 rules for writing multiple-choice test items. The taxonomy is the result of an analysis of 46 textbooks dealing with multiple-choice item writing. For nearly half of the rules, no research was found. (SLD)
Descriptors: Classification, Literature Reviews, Multiple Choice Tests, Test Construction
Harman, Susan – Phi Delta Kappan, 1992
The structure and power of certain reading, mathematics, and psychological tests prevent educators from really seeing children and how they think. Furthermore, the curriculum has been corrupted into nothing more than practice for the tests. Adults rarely ask children to explain their thinking. This drill-and-kill approach reveals little truth about…
Descriptors: Elementary Education, Learning Problems, Multiple Choice Tests, Reading Difficulties