Peer reviewed: Grzelkowski, Kathryn P. – Teaching Sociology, 1987
Describes the pedagogical and philosophical reasons for adopting the use of take-home multiple choice exams for introductory sociology courses. Reports students' reactions to the exams and provides a discussion of the grading conflicts which arise due to improved performance. (JDH)
Descriptors: College Instruction, Educational Philosophy, Educational Sociology, Grading
Peer reviewed: Roberts, Dennis M. – Journal of Educational Measurement, 1987
This study examines a score-difference model for the detection of cheating based on the difference between two scores for an examinee: one based on the appropriate scoring key and another based on an alternative, inappropriate key. It argues that the score-difference method could falsely accuse students of cheating. (Author/JAZ)
Descriptors: Answer Keys, Cheating, Mathematical Models, Multiple Choice Tests
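The score-difference idea in the Roberts abstract can be sketched in a few lines. This is an illustrative toy, not Roberts's statistical model: the keys, response strings, and function names below are invented for the example, and the real method involves a formal decision rule, not a raw difference.

```python
def score(responses, key):
    """Number of responses that match a scoring key."""
    return sum(r == k for r, k in zip(responses, key))

def score_difference(responses, correct_key, alternative_key):
    """Difference between the score on the appropriate key and the score
    on an alternative, inappropriate key. A markedly negative value is
    read as evidence the examinee followed the wrong key (e.g., copied a
    neighbor's answers) -- which is where false accusations can arise."""
    return score(responses, correct_key) - score(responses, alternative_key)

correct = "ABCDABCDAB"
alt     = "ABDDACCDBB"   # hypothetical inappropriate key (e.g., a neighbor)
honest  = "ABCDABCDCB"   # mostly tracks the correct key
copier  = "ABDDACCDBB"   # tracks the alternative key exactly

print(score_difference(honest, correct, alt))  # 2 (positive)
print(score_difference(copier, correct, alt))  # -3 (negative)
```

An honest low scorer whose errors happen to coincide with the alternative key would also produce a negative difference, which is the abstract's point about false accusation.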
Peer reviewed: Friel, Michael – System, 1984
Proposes a new type of oral test, based on Grice's Co-operative Principle and its dependent maxims, to measure communicative competence. Describes the design, administration, and scoring of this timed, multiple-choice test and gives sample test materials. Discusses applications to testing languages for special purposes and ways of presenting the…
Descriptors: Communicative Competence (Languages), Language Tests, Languages for Special Purposes, Multiple Choice Tests
Peer reviewed: Cohen, Andrew D. – Language Testing, 1984
Discusses methods for obtaining verbal report data on second language test-taking strategies. Reports on the findings obtained from unpublished studies dealing with how language learners take reading tests. Concludes that there should be a closer fit between how test constructors intend their tests to be taken and how respondents actually take…
Descriptors: Cloze Procedure, Language Tests, Multiple Choice Tests, Reading Tests
Peer reviewed: Fabrey, Lawrence J.; Case, Susan M. – Journal of Medical Education, 1985
The effect on test scores of changing answers to multiple-choice questions was studied and compared to earlier research. The current setting was a nationally administered, in-training, specialty examination for medical residents in obstetrics and gynecology. Both low and high scorers improved their scores when they changed answers. (SW)
Descriptors: Educational Testing, Graduate Medical Students, Guessing (Tests), Gynecology
Peer reviewed: Bradbard, David A.; Green, Samuel B. – Journal of Experimental Education, 1986
The effectiveness of the Coombs elimination procedure was evaluated with 29 college students enrolled in a statistics course. Five multiple-choice tests were employed and scored using the Coombs procedure. Results suggest that the Coombs procedure decreased guessing, and this effect increased over the grading period. (Author/LMO)
Descriptors: Analysis of Variance, College Students, Guessing (Tests), Higher Education
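Under Coombs elimination scoring, examinees cross out every alternative they believe is wrong rather than picking one. A minimal sketch of one common scoring rule (the exact rule in the Bradbard and Green study may differ; the function and values here are illustrative):

```python
def coombs_item_score(eliminated, keyed_answer, n_alternatives):
    """Score one item under a common Coombs elimination rule:
    +1 for each distractor eliminated, but -(n-1) if the keyed answer
    itself was eliminated. Full knowledge (all distractors struck)
    earns n-1; randomly striking a single option has expected score 0,
    which is how the procedure discourages blind guessing."""
    if keyed_answer in eliminated:
        return -(n_alternatives - 1)
    return len(eliminated)

# A four-option item keyed 'C':
print(coombs_item_score({'A', 'B', 'D'}, 'C', 4))  # 3  (full knowledge)
print(coombs_item_score({'A'}, 'C', 4))            # 1  (partial knowledge)
print(coombs_item_score({'A', 'C'}, 'C', 4))       # -3 (misinformation)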
Peer reviewedSkinner, Nicholas F. – Teaching of Psychology, 1983
Because of a belief that the alternatives they had chosen initially were probably correct, most subjects were reluctant to change answers, and, consequently, did so only when they were highly confident in the change. Results were that more than half the changes were correct. (RM)
Descriptors: Decision Making, Educational Research, Females, Higher Education
Peer reviewedFrederiksen, Norman – American Psychologist, 1984
Argues that because widely used multiple choice tests do not measure more complex cognitive skills, such skills are not taught. Suggests that greater costs of tests in other formats can be justified by their value for instruction--to encourage teaching of higher level cognitive skills and provide practice with feedback. (CMG)
Descriptors: Cognitive Ability, Curriculum, Educational Testing, Elementary Secondary Education
Peer reviewedPlake, Barbara S.; Huntley, Renee M. – Educational and Psychological Measurement, 1984
Two studies examined the effect of making the correct answer of a multiple choice test item grammatically consistent with the item. American College Testing Assessment experimental items were constructed to investigate grammatical compliance to investigate grammatical compliance for plural-singular and vowel-consonant agreement. Results suggest…
Descriptors: Grammar, Higher Education, Item Analysis, Multiple Choice Tests
Arizona Department of Education, 2006
Arizona's Instrument to Measure Standards Dual Purpose Assessment (AIMS DPA) is a combination of two separate tests. One test is AIMS, which measures how well the student knows the reading, writing , and mathematics content that all Arizona students at the third grade level are expected to know and be able to do. AIMS includes multiple-choice…
Descriptors: Grade 3, Test Items, Scoring, Academic Standards
Arizona Department of Education, 2006
Arizona's Instrument to Measure Standards Dual Purpose Assessment (AIMS DPA) is a combination of two separate tests. One test is AIMS, which measures how well the student knows the reading, writing, and mathematics content that all Arizona students in the student's grade level are expected to know and be able to do. AIMS DPA includes…
Descriptors: Grade 4, Scoring, Academic Standards, Writing Tests
Arizona Department of Education, 2006
Arizona's Instrument to Measure Standards Dual Purpose Assessment (AIMS DPA) is a combination of two separate tests. One test is AIMS, which measures how well the student knows the reading, writing, and mathematics content that all Arizona students in the student's grade level are expected to know and be able to do. AIMS DPA includes…
Descriptors: Grade 5, Scoring, Academic Standards, Writing Tests
Arizona Department of Education, 2006
Arizona's Instrument to Measure Standards Dual Purpose Assessment (AIMS DPA) is a combination of two separate tests. One test is AIMS, which measures how well students know the reading, writing, and mathematics content that all Arizona students at the sixth grade level are expected to know and be able to do. AIMS DPA includes multiple-choice…
Descriptors: Grade 6, Scoring, Academic Standards, Writing Tests
Schnipke, Deborah L. – 1999
When running out of time on a multiple-choice test such as the Law School Admission Test (LSAT), some test takers are likely to respond rapidly to the remaining unanswered items in an attempt to get some items right by chance. Because these responses will tend to be incorrect, the presence of rapid-guessing behavior could cause these items to…
Descriptors: College Entrance Examinations, Difficulty Level, Estimation (Mathematics), Guessing (Tests)
Johnson, Matthew S.; Sinharay, Sandip – 2003
For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…
Descriptors: Bayesian Statistics, Constructed Response, Educational Assessment, Estimation (Mathematics)


