Publication Date
In 2025 | 0
Since 2024 | 1
Since 2021 (last 5 years) | 2
Since 2016 (last 10 years) | 3
Since 2006 (last 20 years) | 3
Descriptor
Multiple Choice Tests | 9
Test Construction | 9
Test Items | 5
Test Format | 4
Scores | 3
Test Use | 3
Achievement Tests | 2
Comparative Testing | 2
Computer Assisted Testing | 2
Difficulty Level | 2
Item Response Theory | 2
Source
Educational Measurement: Issues and Practice | 9
Author
Ahmadi, Alireza | 1
Albanese, Mark A. | 1
Armstrong, Anne-Marie | 1
Downing, Steven M. | 1
Frisbie, David A. | 1
Jimmy de la Torre | 1
Jinran Wu | 1
Katz, Irvin R. | 1
Keehner, Madeleine | 1
Mehrpour, Saeed | 1
Moloodi, Amirsaeid | 1
Publication Type
Journal Articles | 9
Reports - Research | 6
Information Analyses | 3
Speeches/Meeting Papers | 2
Reports - Evaluative | 1
Assessments and Surveys
Watson Glaser Critical… | 1
Xuelan Qiu; Jimmy de la Torre; You-Gan Wang; Jinran Wu – Educational Measurement: Issues and Practice, 2024
Multidimensional forced-choice (MFC) items have been found useful for reducing response biases in personality assessments. However, conventional scoring methods for MFC items result in ipsative data, hindering wider application of the MFC format. In the last decade, a number of item response theory (IRT) models have been developed,…
Descriptors: Item Response Theory, Personality Traits, Personality Measures, Personality Assessment
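For readers unfamiliar with the ipsativity problem the abstract mentions, here is a minimal sketch, assuming invented trait names and block contents (none of this is from the article): under conventional rank-based scoring of forced-choice blocks, every respondent receives the same total score, which is what makes the resulting data ipsative.

```python
# Illustrative sketch (not from the article): conventional rank scoring of
# multidimensional forced-choice (MFC) blocks. Each block asks the respondent
# to rank statements that each measure a different trait; the rank a statement
# receives is added to its trait's score. Because every block hands out the
# same set of rank points, totals are constant across respondents -- the
# "ipsative data" problem the abstract refers to.

# Hypothetical blocks: (trait, statement) pairs; trait names are assumptions.
BLOCKS = [
    [("extraversion", "I enjoy meeting new people."),
     ("conscientiousness", "I finish tasks ahead of schedule."),
     ("openness", "I like trying unfamiliar activities.")],
    [("extraversion", "I speak up in group discussions."),
     ("conscientiousness", "I keep my workspace organized."),
     ("openness", "I seek out new ideas.")],
]

def score_rankings(rankings):
    """rankings[b] lists the statement indices of block b from most to least
    like the respondent; the top choice earns the most points."""
    scores = {}
    for block, ranking in zip(BLOCKS, rankings):
        for points, idx in enumerate(reversed(ranking), start=1):
            trait = block[idx][0]
            scores[trait] = scores.get(trait, 0) + points
    return scores

# Two respondents with different preferences...
a = score_rankings([[0, 1, 2], [2, 0, 1]])
b = score_rankings([[2, 1, 0], [1, 0, 2]])

print(a, sum(a.values()))   # trait profiles differ between respondents...
print(b, sum(b.values()))   # ...but both totals are 12: each block gives 1+2+3 points
```

The IRT models the abstract alludes to recover normative trait estimates from exactly this kind of ranking data; the sketch only shows why the conventional scores cannot do so.
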
Rafatbakhsh, Elaheh; Ahmadi, Alireza; Moloodi, Amirsaeid; Mehrpour, Saeed – Educational Measurement: Issues and Practice, 2021
Test development is a crucial, yet difficult and time-consuming part of any educational system, and the task often falls entirely on teachers. Automatic item generation systems have recently drawn attention because they can reduce this burden and make test development more convenient. Such systems have been developed to generate items for vocabulary,…
Descriptors: Test Construction, Test Items, Computer Assisted Testing, Multiple Choice Tests
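As a rough illustration of the kind of system the abstract describes, a minimal template-based generator sketch; the stem template, word bank, and distractor lists below are invented assumptions, not the article's actual method or resources.

```python
# Minimal sketch of template-based automatic item generation for vocabulary
# multiple-choice items. The template, target words, and distractors are
# invented for illustration; a real system would draw on corpora and
# linguistic resources rather than a hand-written dictionary.
import random

STEM_TEMPLATE = 'Choose the word closest in meaning to "{target}".'

# Hypothetical target words with their keys and plausible distractors.
WORD_BANK = {
    "rapid":   {"key": "fast",     "distractors": ["quiet", "narrow", "heavy"]},
    "fragile": {"key": "delicate", "distractors": ["sturdy", "ancient", "bright"]},
}

def generate_item(target, rng=random):
    entry = WORD_BANK[target]
    options = [entry["key"]] + rng.sample(entry["distractors"], 3)
    rng.shuffle(options)
    return {
        "stem": STEM_TEMPLATE.format(target=target),
        "options": options,
        "answer": options.index(entry["key"]),  # position of the keyed option
    }

if __name__ == "__main__":
    rng = random.Random(0)  # fixed seed so the generated forms are reproducible
    for word in WORD_BANK:
        item = generate_item(word, rng)
        print(item["stem"])
        for i, opt in enumerate(item["options"]):
            marker = "*" if i == item["answer"] else " "
            print(f"  {marker} {chr(65 + i)}. {opt}")
```
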
Moon, Jung Aa; Keehner, Madeleine; Katz, Irvin R. – Educational Measurement: Issues and Practice, 2019
The current study investigated how item formats and their inherent affordances influence test-takers' cognition under uncertainty. Adult participants solved content-equivalent math items in multiple-selection multiple-choice and four alternative grid formats. The results indicated that participants' affirmative response tendency (i.e., judge the…
Descriptors: Affordances, Test Items, Test Format, Test Wiseness
Rodriguez, Michael C. – Educational Measurement: Issues and Practice, 2005
Multiple-choice items are a mainstay of achievement testing. Adequately covering the content domain and certifying achievement proficiency with meaningful, precise scores requires many high-quality items. More 3-option items than 4- or 5-option items can be administered per unit of testing time, improving content coverage, without…
Descriptors: Psychometrics, Testing, Scores, Test Construction
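To make the testing-time argument concrete, a back-of-the-envelope sketch; the per-stem and per-option reading times are assumed values, not figures from Rodriguez (2005).

```python
# Back-of-the-envelope illustration of the testing-time argument: with fewer
# options per item, more items fit into the same session, which can broaden
# content coverage. The timing constants below are assumptions.
STEM_SECONDS = 30          # assumed time to read the stem and think
OPTION_SECONDS = 10        # assumed time to read and weigh each option
SESSION_SECONDS = 60 * 60  # a one-hour test

def items_per_session(n_options):
    seconds_per_item = STEM_SECONDS + n_options * OPTION_SECONDS
    return SESSION_SECONDS // seconds_per_item

for n in (3, 4, 5):
    print(f"{n}-option items: about {items_per_session(n)} per hour")
# 3-option items: about 60 per hour
# 4-option items: about 51 per hour
# 5-option items: about 45 per hour
```
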

Albanese, Mark A. – Educational Measurement: Issues and Practice, 1993
A comprehensive review is given of evidence bearing on the recommendation to avoid complex multiple-choice (CMC) items. Avoiding Type K items (four primary responses and five secondary choices) seems warranted, but the evidence against CMC in general is less clear. (SLD)
Descriptors: Cues, Difficulty Level, Multiple Choice Tests, Responses
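For readers unfamiliar with the format, a sketch of how a Type K (complex multiple-choice) item is structured, following the four-primary-responses, five-secondary-choices layout the review describes; the statements and the particular letter-to-combination mapping are illustrative assumptions.

```python
# Illustrative structure of a Type K (complex multiple-choice) item: four
# primary statements, and five lettered choices that each name a combination
# of those statements. The statements and the exact combinations shown are
# assumptions for illustration only.
TYPE_K_ITEM = {
    "stem": "Which of the following are properties of a reliable test?",
    "primaries": {
        1: "Scores are consistent across parallel forms.",
        2: "Scores are free of all measurement error.",
        3: "Scores are stable over short retest intervals.",
        4: "Scores depend mainly on examinee guessing.",
    },
    # Secondary choices: each letter keys a set of primary statements.
    "choices": {
        "A": {1, 2, 3},
        "B": {1, 3},
        "C": {2, 4},
        "D": {4},
        "E": {1, 2, 3, 4},
    },
    "key": "B",  # the combination the item writer intends as correct
}

def is_correct(response):
    """Score the item right/wrong on the chosen letter alone, which is how
    such items are conventionally keyed."""
    return response == TYPE_K_ITEM["key"]

print(is_correct("B"), is_correct("C"))  # True False
```

The cueing problem discussed in this literature comes from the secondary choices themselves: an examinee sure that statement 4 is wrong can eliminate C, D, and E without judging the remaining statements.
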

Downing, Steven M. – Educational Measurement: Issues and Practice, 1992
Research on true-false (TF), multiple-choice, and alternate-choice (AC) tests is reviewed, discussing the strengths, weaknesses, and usefulness of each in classroom and large-scale testing. Recommendations are made for improving the use of AC items to overcome some of the problems associated with TF items. (SLD)
Descriptors: Comparative Analysis, Educational Research, Multiple Choice Tests, Objective Tests

Norris, Stephen P. – Educational Measurement: Issues and Practice, 1988
A methodology is described for developing multiple-choice critical thinking tests that attempts to overcome certain validity and fairness problems associated with such tests. It is concluded that direct evidence on test validity should be gathered using verbal reports of students' thinking on trial items. (TJH)
Descriptors: Beliefs, Critical Thinking, Culture Fair Tests, Elementary Secondary Education

Frisbie, David A. – Educational Measurement: Issues and Practice, 1992
Literature on the multiple true-false (MTF) item format is reviewed. Each answer cluster of an MTF item may contain several true statements, and the correctness of each is judged independently. MTF tests appear efficient and reliable, although they are somewhat harder for examinees than multiple-choice items. (SLD)
Descriptors: Achievement Tests, Difficulty Level, Literature Reviews, Multiple Choice Tests
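A minimal sketch of how an MTF cluster can be scored with each statement judged independently; the stem, statements, and keys are invented for illustration, and the one-point-per-statement convention is an assumption rather than something taken from the review.

```python
# Minimal sketch of a multiple true-false (MTF) cluster: one stem, several
# statements, and each statement scored independently as true or false.
# The stem, statements, and keys are invented for illustration.
MTF_CLUSTER = {
    "stem": "Regarding classical test theory:",
    "statements": [
        ("Observed score equals true score plus error.", True),
        ("Reliability can exceed 1.0.",                   False),
        ("Longer tests are generally more reliable.",     True),
        ("Error scores correlate with true scores.",      False),
    ],
}

def score_cluster(responses):
    """responses[i] is the examinee's True/False judgment of statement i;
    each statement contributes one point, scored independently."""
    return sum(
        int(resp == key)
        for resp, (_, key) in zip(responses, MTF_CLUSTER["statements"])
    )

# An examinee who misjudges only the second statement earns 3 of 4 points.
print(score_cluster([True, True, True, False]))  # 3
```

Scoring each statement as its own item is what gives the format its efficiency: one stem yields several independent score points.
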

Armstrong, Anne-Marie – Educational Measurement: Issues and Practice, 1993
The effects on test performance of differentially written multiple-choice tests and of test takers' cognitive style were studied for 47 graduate students and 35 public school and college teachers. Adhering to item-writing guidelines resulted in essentially equal mean scores for the two groups of differing cognitive style. (SLD)
Descriptors: Cognitive Style, College Faculty, Comparative Testing, Graduate Students