Publication Date
In 2025: 1
Since 2024: 3
Since 2021 (last 5 years): 21
Since 2016 (last 10 years): 29
Since 2006 (last 20 years): 37
Descriptor
Comparative Analysis: 50
Item Analysis: 50
Multiple Choice Tests: 50
Test Items: 36
Foreign Countries: 20
Difficulty Level: 15
Test Construction: 13
Item Response Theory: 11
Statistical Analysis: 10
Test Format: 10
Undergraduate Students: 10
Author
ALKursheh, Taha Okleh: 1
Aesaert, Koen: 1
Al-zboon, Habis Saad: 1
AlNasraween, Mo'en Salman: 1
Kah, Alexander: 1
Alsma, Jelmer: 1
Aminah, Nonoh Siti: 1
Babcock, Ben: 1
Badeleh, Alireza: 1
Baghaei, Purya: 1
Barry, Carol: 1
Publication Type
Reports - Research: 40
Journal Articles: 36
Speeches/Meeting Papers: 8
Reports - Descriptive: 2
Reports - Evaluative: 2
Non-Print Media: 1
Reference Materials - General: 1
Education Level
Higher Education: 15
Postsecondary Education: 15
Secondary Education: 8
Elementary Education: 5
High Schools: 3
Elementary Secondary Education: 2
Grade 12: 1
Grade 4: 1
Grade 5: 1
Grade 7: 1
Grade 8: 1
Audience
Researchers: 1
Assessments and Surveys
Program for International Student Assessment: 1
Test of English as a Foreign Language: 1
Lei Guo; Wenjie Zhou; Xiao Li – Journal of Educational and Behavioral Statistics, 2024
The testlet design is very popular in educational and psychological assessments. This article proposes a new cognitive diagnosis model, the multiple-choice cognitive diagnostic testlet (MC-CDT) model, for tests using testlets consisting of MC items. The MC-CDT model uses examinees' original responses to MC items instead of dichotomously scored…
Descriptors: Multiple Choice Tests, Diagnostic Tests, Accuracy, Computer Software
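For context on the model family this entry builds on, here is a minimal Python sketch of the standard DINA cognitive diagnosis model. It is background illustration only, not the proposed MC-CDT model, and the skill profile, Q-matrix row, and parameter values are hypothetical.

```python
import numpy as np

def dina_probability(alpha, q_row, slip, guess):
    """DINA model, a standard cognitive diagnosis model: an examinee
    answers an item correctly with probability 1 - slip if they master
    every skill the item requires (given by the Q-matrix row q_row),
    and with probability guess otherwise."""
    masters_all = np.all(alpha >= q_row)
    return (1 - slip) if masters_all else guess

# Hypothetical examinee and item.
alpha = np.array([1, 0, 1])   # skills mastered (1) or not (0)
q_row = np.array([1, 0, 0])   # item requires only the first skill
print(dina_probability(alpha, q_row, slip=0.1, guess=0.2))  # -> 0.9
```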
Roger Young; Emily Courtney; Alexander Kah; Mariah Wilkerson; Yi-Hsin Chen – Teaching of Psychology, 2025
Background: Multiple-choice item (MCI) assessments are burdensome for instructors to develop. Artificial intelligence (AI, e.g., ChatGPT) can streamline the process without sacrificing quality, and the quality of AI-generated MCIs is comparable to that of MCIs written by human experts. However, whether the quality of AI-generated MCIs is equally good across various domain-…
Descriptors: Item Response Theory, Multiple Choice Tests, Psychology, Textbooks
Katrin Klingbeil; Fabian Rösken; Bärbel Barzel; Florian Schacht; Kaye Stacey; Vicki Steinle; Daniel Thurm – ZDM: Mathematics Education, 2024
Assessing students' (mis)conceptions is a challenging task for teachers as well as for researchers. While individual assessment, for example through interviews, can provide deep insights into students' thinking, this is very time-consuming and therefore not feasible for whole classes or even larger settings. For those settings, automatically…
Descriptors: Multiple Choice Tests, Formative Evaluation, Mathematics Tests, Misconceptions
Olsho, Alexis; Smith, Trevor I.; Eaton, Philip; Zimmerman, Charlotte; Boudreaux, Andrew; White Brahmia, Suzanne – Physical Review Physics Education Research, 2023
We developed the Physics Inventory of Quantitative Literacy (PIQL) to assess students' quantitative reasoning in introductory physics contexts. The PIQL includes several "multiple-choice/multiple-response" (MCMR) items (i.e., multiple-choice questions for which more than one response may be selected) as well as traditional single-response…
Descriptors: Multiple Choice Tests, Science Tests, Physics, Measures (Individuals)
Musa Adekunle Ayanwale – Discover Education, 2023
Examination scores obtained by students from the West African Examinations Council (WAEC) and the National Business and Technical Examinations Board (NABTEB) may not be directly comparable due to differences in examination administration, item characteristics of the subject in question, and student abilities. For more accurate comparisons, scores…
Descriptors: Equated Scores, Mathematics Tests, Test Items, Test Format
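As an illustration of what score equating involves, here is a minimal sketch of classical linear (mean-sigma) equating in Python. The score samples are invented, and this is not necessarily the study's actual procedure, which the truncated abstract does not specify.

```python
import numpy as np

def linear_equate(scores_x, scores_y):
    """Classical linear equating: map scores from form X onto form Y's
    scale by matching the two score distributions' means and SDs."""
    mx, sx = np.mean(scores_x), np.std(scores_x)
    my, sy = np.mean(scores_y), np.std(scores_y)
    return lambda x: my + (sy / sx) * (x - mx)

# Hypothetical score samples from two examination boards' forms.
form_x = np.array([45, 50, 55, 60, 65])
form_y = np.array([40, 48, 56, 64, 72])
to_y = linear_equate(form_x, form_y)
print(to_y(55))  # a form-X score of 55 maps to 56.0 on form Y's scale
```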
Babcock, Ben; Siegel, Zachary D. – Practical Assessment, Research & Evaluation, 2022
Research about repeated testing has revealed that retaking the same exam form generally does not advantage or disadvantage failing candidates in selected-response-style credentialing exams. Feinberg, Raymond, and Haist (2015) found a contributing factor to this phenomenon: people answering items incorrectly on both attempts give the same incorrect…
Descriptors: Multiple Choice Tests, Item Analysis, Test Items, Response Style (Tests)
Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis
Deribo, Tobias; Goldhammer, Frank; Kroehne, Ulf – Educational and Psychological Measurement, 2023
As researchers in the social sciences, we are often interested in studying constructs that are not directly observable, using assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is skimmed briefly but not read or engaged with in depth. Hence, a…
Descriptors: Reaction Time, Guessing (Tests), Behavior Patterns, Bias
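One common, simple way to operationalize rapid-guessing detection is a response-time threshold. The sketch below uses a hypothetical threshold rule and data, not necessarily the authors' procedure, and flags responses far faster than is plausible for genuine engagement.

```python
from statistics import median

def flag_rapid_guesses(response_times, fraction=0.10):
    """Flag a response as a likely rapid guess when its time falls below
    a fixed fraction of the item's median response time."""
    threshold = fraction * median(response_times)
    return [t < threshold for t in response_times]

# Hypothetical times (seconds) spent on one item by five examinees.
times = [42.0, 38.5, 2.1, 55.0, 1.4]
print(flag_rapid_guesses(times))  # -> [False, False, True, False, True]
```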
Shear, Benjamin R. – Journal of Educational Measurement, 2023
Large-scale standardized tests are regularly used to measure student achievement overall and for student subgroups. These uses assume tests provide comparable measures of outcomes across student subgroups, but prior research suggests score comparisons across gender groups may be complicated by the type of test items used. This paper presents…
Descriptors: Gender Bias, Item Analysis, Test Items, Achievement Tests
Subali, Bambang; Kumaidi; Aminah, Nonoh Siti – International Journal of Instruction, 2021
This research aims to compare the item characteristics of instruments for assessing elementary students' level of mastery of the scientific method, as analyzed using Classical Test Theory (CTT) and Item Response Theory (IRT). The two analyses are usually done separately and on different objects; in this study, the same data were analyzed…
Descriptors: Test Items, Item Response Theory, Item Analysis, Comparative Analysis
Paaßen, Benjamin; Dywel, Malwina; Fleckenstein, Melanie; Pinkwart, Niels – International Educational Data Mining Society, 2022
Item response theory (IRT) is a popular method to infer student abilities and item difficulties from observed test responses. However, IRT struggles with two challenges: how to map items to skills when multiple skills are present, and how to infer the ability of new students who were not part of the training data. Inspired by recent advances…
Descriptors: Item Response Theory, Test Items, Item Analysis, Inferences
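As a reminder of the basic IRT setup this entry builds on, here is a minimal Python sketch of the Rasch (one-parameter logistic) model; the ability and difficulty values are made up, and the paper's actual model extends well beyond this.

```python
import numpy as np

def rasch_probability(theta, b):
    """Rasch (1PL) model: probability that a student with ability theta
    answers an item of difficulty b correctly (both in logits)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical student facing three items of increasing difficulty.
theta = 0.5
difficulties = np.array([-1.0, 0.0, 1.5])
print(rasch_probability(theta, difficulties))
# -> approx. [0.82, 0.62, 0.27]; harder items are less likely correct
```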
ALKursheh, Taha Okleh; Al-zboon, Habis Saad; AlNasraween, Mo'en Salman – International Journal of Instruction, 2022
This study aimed to compare the effect of two test item formats (multiple-choice and completion) on estimates of person ability, item parameters, and the test information function (TIF). To achieve the aim of the study, two formats of a mathematics (1) test were created: multiple-choice and completion; in its final form, the test consisted of 31 items. The…
Descriptors: Comparative Analysis, Test Items, Item Response Theory, Test Format
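Since this entry compares formats via the test information function (TIF), here is a minimal sketch of the TIF under the Rasch model: each item contributes information P(1 - P) at ability theta, summed over items. This is my illustration with invented difficulties, not the study's computation.

```python
import numpy as np

def rasch_tif(theta, difficulties):
    """Test information function under the Rasch model:
    I(theta) = sum over items of P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-(theta - np.asarray(difficulties))))
    return float(np.sum(p * (1 - p)))

# Hypothetical three-item test; information peaks where ability
# matches the item difficulties.
print(rasch_tif(0.0, [-1.0, 0.0, 1.0]))  # -> about 0.64
```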
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawbacks, multiple-choice questions are an enduring feature of instruction because they can be answered more rapidly than open-response questions and are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
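As a toy picture of distractor generation from a semantic network (hand-built adjacency data, not the authors' algorithm, which the truncated abstract does not fully describe), one can select distractors as concepts adjacent to the correct answer:

```python
# Hypothetical hand-built semantic network (concept adjacency map).
semantic_network = {
    "mitochondrion": ["chloroplast", "ribosome", "nucleus"],
    "chloroplast": ["mitochondrion", "vacuole"],
    "ribosome": ["nucleus", "mitochondrion"],
}

def pick_distractors(correct_answer, k=3):
    """Return up to k concepts adjacent to the correct answer, on the
    idea that semantically nearby concepts make plausible wrong options."""
    return semantic_network.get(correct_answer, [])[:k]

print(pick_distractors("mitochondrion"))
# -> ['chloroplast', 'ribosome', 'nucleus']
```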
Lions, Séverin; Dartnell, Pablo; Toledo, Gabriela; Godoy, María Inés; Córdova, Nora; Jiménez, Daniela; Lemarié, Julie – Educational and Psychological Measurement, 2023
Even though the impact of the position of response options on answers to multiple-choice items has been investigated for decades, it remains debated. Research on this topic is inconclusive, perhaps because too few studies have obtained experimental data from large samples in a real-world context and have manipulated the position of both…
Descriptors: Multiple Choice Tests, Test Items, Item Analysis, Responses
Krall, Geoff – Canadian Journal of Science, Mathematics and Technology Education, 2023
In order to identify the potential benefits and challenges of implementing student portfolios as quality mathematics assessment, a pilot study was conducted with teachers in various secondary school settings. The multi-case study consisted of five teacher participants from geographically and demographically differing contexts, four in the USA and…
Descriptors: Portfolio Assessment, Mathematics Instruction, Evaluation Methods, Pilot Projects