Peer reviewed: Jodoin, Michael G. – Journal of Educational Measurement, 2003
Analyzed examinee responses to conventional (multiple-choice) and innovative item formats in a computer-based testing program for item response theory (IRT) information under the three-parameter and graded response models. Results for more than 3,000 adult examinees on two tests show that the innovative item types in this study provided more…
Descriptors: Ability, Adults, Computer Assisted Testing, Item Response Theory
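The item-information comparison in the Jodoin abstract rests on the standard Fisher information function for the three-parameter logistic (3PL) model. A minimal generic sketch of that formula, not the study's own code:

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model:
    a guessing floor c plus a logistic curve with discrimination a
    and difficulty b."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information contributed by one 3PL item at ability theta:
    I(theta) = a^2 * (Q/P) * ((P - c) / (1 - c))^2."""
    p = p_3pl(theta, a, b, c)
    q = 1.0 - p
    return a**2 * (q / p) * ((p - c) / (1.0 - c))**2

# A moderately discriminating item (a = 1.5) with a 0.2 guessing
# parameter, evaluated at its own difficulty (theta = b = 0):
info = item_information(theta=0.0, a=1.5, b=0.0, c=0.2)  # 0.375
```

Summing this quantity over items at a given theta gives the test information that such studies compare across item formats.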
Peer reviewed: Schlatter, Mark D. – Primus, 2002
Discusses one way of addressing the difficulty of mastering a large number of concepts through the use of ConcepTests; that is, multiple choice questions given in a lecture that test understanding as opposed to calculation. Investigates various types of ConcepTests and the material they can cover. (Author/KHR)
Descriptors: Calculus, Concept Formation, Evaluation, Group Activities
Peer reviewed: Stientjes, Marcia K. – International Journal of Listening, 1998
Compares video and audio stimuli for listening assessment as well as multiple choice and constructed response modes, using modifications of the ACT Work Keys Listening and Writing assessment and the ACT Work Keys Listening for Understanding prototype. Confirms that examinees do less well on constructed response than on multiple choice responses.…
Descriptors: Communication Research, Constructed Response, Evaluation Methods, Higher Education
Peer reviewed: Albanese, Mark A.; Jacobs, Richard M. – Evaluation and the Health Professions, 1990
The reliability and validity of a procedure to measure diagnostic-reasoning and problem-solving skills taught in predoctoral orthodontic education were studied using 68 second-year dental students. The procedure includes stimulus material and 33 multiple-choice items. It is a feasible way of assessing problem-solving skills in dental education…
Descriptors: Clinical Diagnosis, Dental Students, Higher Education, Multiple Choice Tests
Peer reviewed: Fischer, Florence E. – Educational Research Quarterly, 1988
This study examined the effects of different types of directions on the guessing behavior on multiple-choice tests of 43 male and 39 female fifth-graders. Areas investigated included reward and penalty clauses and test anxiety. Students' understanding of directions was the only significant factor discovered. (TJH)
Descriptors: Elementary Education, Elementary School Students, Grade 5, Guessing (Tests)
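The reward and penalty clauses examined in the Fischer study correspond to classic formula scoring, which penalizes wrong answers so that blind guessing has zero expected gain. The formula is standard; the function name below is illustrative:

```python
def formula_score(right, wrong, options):
    """Correction-for-guessing: score = R - W / (k - 1), where k is
    the number of answer options. Omits are neither rewarded nor
    penalized, so random guessing among all k options has an
    expected gain of zero (1/k gained, (k-1)/k * 1/(k-1) lost)."""
    return right - wrong / (options - 1)

# On a 4-option test, 30 right and 12 wrong score the same as
# 26 right with the remaining items omitted:
score = formula_score(right=30, wrong=12, options=4)  # 26.0
```

Whether examinees actually respond to such penalties as the formula assumes is exactly the kind of question the directions study probes.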
Peer reviewed: Pisacano, Nicholas J.; And Others – Academic Medicine, 1989
A system of medical classification based on the dimensions of body system, etiology, and stage of disease was evaluated by classifying the content of one specialty board's examinations. This classification system may allow a board to define the content of its examinations, monitor requirements for certification, and communicate its standards.…
Descriptors: Certification, Classification, Content Validity, Family Practice (Medicine)
Peer reviewed: Head, Martha H.; And Others – Reading Research and Instruction, 1989
Investigates effects of topic interest, writing ability, and summarization training on seventh-grade subjects' performance on multiple-choice tests and ability to summarize a social studies text. Finds that multiple-choice and summarization measures overlap little in the kinds of text comprehension they assess. (RS)
Descriptors: Grade 7, Intermediate Grades, Measurement Techniques, Measures (Individuals)
Peer reviewed: Wilcox, Rand R.; And Others – Journal of Educational Measurement, 1988
The second response conditional probability model of decision-making strategies used by examinees answering multiple choice test items was revised. Increasing the number of distractors or providing distractors giving examinees (N=106) the option to follow the model improved results and gave a good fit to data for 29 of 30 items. (SLD)
Descriptors: Cognitive Tests, Decision Making, Mathematical Models, Multiple Choice Tests
Peer reviewed: Ke, Chuanren – Foreign Language Annals, 1995
This study investigated the nature of progress made in learning Chinese as a Foreign Language by adult learners in an intensive summer program in an American university setting. Chinese language ability of 22 students was measured by a multiple choice test and an interview test. Data are interpreted and discussed in terms of implications for…
Descriptors: Adults, Chinese, Interviews, Language Proficiency
Peer reviewed: Wang, Xiang-bo; And Others – Applied Measurement in Education, 1995
An experiment is reported in which 225 high school students were asked to choose among several multiple-choice items but then were required to answer them all. It is concluded that allowing choice while having fair tests is only possible when choice is irrelevant in terms of difficulty. (SLD)
Descriptors: Adaptive Testing, Difficulty Level, Equated Scores, High School Students
Peer reviewed: Crites, Terry – Elementary School Journal, 1992
A total of 36 third, fifth, and seventh graders were interviewed about strategies they used to solve multiple-choice questions involving estimation of discrete quantities. Successful estimators tended to use decomposition/recomposition and multiple benchmark strategies, whereas less successful estimators generally used perceptually based…
Descriptors: Benchmarking, Elementary Education, Elementary School Students, Estimation (Mathematics)
Peer reviewed: Greenberg, Karen L. – WPA: Writing Program Administration, 1992
Elaborates on and responds to challenges of direct writing assessment. Speculates on future directions in writing assessment. Suggests that, if writing instructors accept that writing is a multidimensional, situational construct that fluctuates across a wide variety of contexts, then they must also respect the complexity of teaching and testing…
Descriptors: Essay Tests, Higher Education, Multiple Choice Tests, Test Format
Peer reviewed: Miller, James H.; Shiehl, Virginia – Teaching Exceptional Children, 1992
A list of materials and construction instructions are provided for making a Buzz Stick and Question Box, an inexpensive, technologically advanced teaching device that serves to reinforce classroom instruction with a self-correcting, independent activity. Any activity that uses a multiple-choice format is appropriate for the device. (JDD)
Descriptors: Construction (Process), Construction Materials, Elementary Secondary Education, Instructional Materials
Popham, W. James – Phi Delta Kappan, 1993
For 50 years, large-scale educational achievement testing in the United States was dominated by a (low-cost) multiple-choice assessment strategy. The problem is finding adequate financial resources to support constructed-response methods used in authentic or performance testing. Matrix sampling, featuring low-proportion sampling of both students…
Descriptors: Costs, Criterion Referenced Tests, Educational Finance, Elementary Secondary Education
Peer reviewed: Sireci, Stephen G.; Geisinger, Kurt F. – Applied Psychological Measurement, 1992
A new method for evaluating the content representation of a test is illustrated. Item similarity ratings were obtained from three content domain experts to assess whether ratings corresponded to item groupings specified in the test blueprint. Multidimensional scaling and cluster analysis provided substantial information about the test's content…
Descriptors: Cluster Analysis, Content Analysis, Multidimensional Scaling, Multiple Choice Tests
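The cluster-analysis step in the Sireci and Geisinger approach groups items by expert-rated similarity. A bare-bones single-linkage agglomeration over an item dissimilarity matrix illustrates the idea; this is a generic sketch under assumed data, not the authors' procedure, which also used multidimensional scaling:

```python
def single_linkage(dissim, n_clusters):
    """Agglomerative single-linkage clustering on a symmetric
    dissimilarity matrix (list of lists). Repeatedly merges the two
    clusters whose closest members are nearest, until n_clusters
    groups remain. Returns clusters as lists of item indices."""
    clusters = [[i] for i in range(len(dissim))]
    while len(clusters) > n_clusters:
        best = None  # (distance, i, j) of the closest cluster pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dissim[a][b]
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Four items whose ratings say 0,1 are alike and 2,3 are alike
# (small values = similar); two clusters should recover the blueprint:
dissim = [[0, 1, 9, 8],
          [1, 0, 9, 9],
          [9, 9, 0, 1],
          [8, 9, 1, 0]]
groups = single_linkage(dissim, n_clusters=2)  # [[0, 1], [2, 3]]
```

Comparing such recovered groups against the groupings in a test blueprint is one way to quantify content representation.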


