Publication Date
In 2025 | 11 |
Since 2024 | 47 |
Descriptor
Multiple Choice Tests | 47 |
Test Items | 47 |
Test Construction | 18 |
Foreign Countries | 17 |
Item Analysis | 17 |
Science Tests | 15 |
Computer Assisted Testing | 12 |
Difficulty Level | 11 |
Test Validity | 11 |
Test Reliability | 10 |
Item Response Theory | 9 |
Author
Aiman Mohammad Freihat | 1 |
Alexander Kah | 1 |
Alexander Unger | 1 |
Ali Zahabi | 1 |
Alicia A. Stoltenberg | 1 |
Amir Hadifar | 1 |
Amy Morales | 1 |
Anastasia Sofroniou | 1 |
Andrew Gardiner | 1 |
Ann E. Harman | 1 |
Archana Praveen Kumar | 1 |
Publication Type
Journal Articles | 44 |
Reports - Research | 38 |
Information Analyses | 3 |
Tests/Questionnaires | 3 |
Dissertations/Theses -… | 2 |
Reports - Descriptive | 2 |
Reports - Evaluative | 2 |
Location
Germany | 2 |
Indonesia | 2 |
United Kingdom | 2 |
Bosnia and Herzegovina | 1 |
Cyprus | 1 |
India | 1 |
Japan | 1 |
New Zealand | 1 |
Philippines | 1 |
Taiwan | 1 |
Thailand | 1 |
Assessments and Surveys
International English… | 1 |
National Assessment of… | 1 |
Test of English for… | 1 |
Trends in International… | 1 |
Séverin Lions; María Paz Blanco; Pablo Dartnell; Carlos Monsalve; Gabriel Ortega; Julie Lemarié – Applied Measurement in Education, 2024
Multiple-choice items are universally used in formal education. Since they should assess learning, not test-wiseness or guesswork, they must be constructed following the highest possible standards. Hundreds of item-writing guides have provided guidelines to help test developers adopt appropriate strategies to define the distribution and sequence…
Descriptors: Test Construction, Multiple Choice Tests, Guidelines, Test Items
Janet Mee; Ravi Pandian; Justin Wolczynski; Amy Morales; Miguel Paniagua; Polina Harik; Peter Baldwin; Brian E. Clauser – Advances in Health Sciences Education, 2024
Recent advances in automated scoring technology have made it practical to replace multiple-choice questions (MCQs) with short-answer questions (SAQs) in large-scale, high-stakes assessments. However, most previous research comparing these formats has used small examinee samples testing under low-stakes conditions. Additionally, previous studies…
Descriptors: Multiple Choice Tests, High Stakes Tests, Test Format, Test Items
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to assess the accuracy of estimating multiple-choice test item parameters under item response theory models. Materials/methods: The researchers relied on measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
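The accuracy indicator this abstract describes — the absolute difference between estimated and actual parameter values — can be illustrated with a simple mean-absolute-error computation. The sketch below is a hypothetical illustration only; the variable names and sample values are assumptions, not data from the study:

```python
def mean_absolute_error(actual, estimated):
    """Mean absolute difference between actual and estimated item
    parameters (e.g., IRT difficulty values). A generic accuracy
    indicator of the kind the abstract describes, not the study's
    exact formula."""
    if len(actual) != len(estimated):
        raise ValueError("parameter lists must have equal length")
    return sum(abs(a - e) for a, e in zip(actual, estimated)) / len(actual)

# Hypothetical true vs. estimated difficulty parameters for four items.
true_b = [-1.2, -0.3, 0.5, 1.4]
est_b = [-1.0, -0.4, 0.7, 1.3]
mae = mean_absolute_error(true_b, est_b)  # ≈ 0.15
```

A smaller value indicates estimates closer to the true parameters; the study compares such indicators across IRT models.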
Sherwin E. Balbuena – Online Submission, 2024
This study introduces a new chi-square test statistic for testing the equality of response frequencies among distracters in multiple-choice tests. The formula uses the information from the number of correct answers and wrong answers, which becomes the basis of calculating the expected values of response frequencies per distracter. The method was…
Descriptors: Multiple Choice Tests, Statistics, Test Validity, Testing
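The general idea in this abstract — expected distractor frequencies derived from the total number of wrong answers, compared to observed frequencies — follows the shape of a standard chi-square goodness-of-fit computation. The helper below is a sketch of that general shape under stated assumptions, not the paper's specific statistic:

```python
def distractor_chi_square(distractor_counts):
    """Chi-square-style statistic for equality of distractor response
    frequencies. Under the null hypothesis that all distractors attract
    responses equally, the expected count per distractor is the total
    number of wrong answers divided by the number of distractors."""
    wrong_total = sum(distractor_counts)
    k = len(distractor_counts)
    expected = wrong_total / k
    return sum((obs - expected) ** 2 / expected for obs in distractor_counts)

# Hypothetical item: 60 wrong answers spread over three distractors.
stat = distractor_chi_square([30, 10, 20])  # expected 20 each -> 10.0
```

A large statistic suggests examinees favor some distractors over others, which is the kind of signal the study uses to examine distractor quality.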
Mingfeng Xue; Mark Wilson – Applied Measurement in Education, 2024
Multidimensionality is common in psychological and educational measurements. This study focuses on dimensions that converge at the upper anchor (i.e. the highest acquisition status defined in a learning progression) and compares different ways of dealing with them using the multidimensional random coefficients multinomial logit model and scale…
Descriptors: Learning Trajectories, Educational Assessment, Item Response Theory, Evolution
Archana Praveen Kumar; Ashalatha Nayak; Manjula Shenoy K.; Chaitanya; Kaustav Ghosh – International Journal of Artificial Intelligence in Education, 2024
Multiple Choice Questions (MCQs) are a popular assessment method because they enable automated evaluation, flexible administration, and use with large groups. Despite these benefits, the manual construction of MCQs is challenging, time-consuming, and error-prone. This is because each MCQ comprises a question called the "stem", a…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Semantics
Rekha; Shakeela K. – Journal on School Educational Technology, 2025
The main objective of the present study was to construct and standardize an achievement test in science for secondary school students in grade 8. An achievement test of 120 items was prepared by the facilitator based on the four main learning objectives of teaching science: knowledge, understanding, application, and…
Descriptors: Test Construction, Standardized Tests, Secondary School Students, Science Achievement
David G. Schreurs; Jaclyn M. Trate; Shalini Srinivasan; Melonie A. Teichert; Cynthia J. Luxford; Jamie L. Schneider; Kristen L. Murphy – Chemistry Education Research and Practice, 2024
With the already widespread nature of multiple-choice assessments and the increasing popularity of answer-until-correct, it is important to have methods available for exploring the validity of these types of assessments as they are developed. This work analyzes a 20-question multiple choice assessment covering introductory undergraduate chemistry…
Descriptors: Multiple Choice Tests, Test Validity, Introductory Courses, Science Tests
Semere Kiros Bitew; Amir Hadifar; Lucas Sterckx; Johannes Deleu; Chris Develder; Thomas Demeester – IEEE Transactions on Learning Technologies, 2024
Multiple-choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Construction, Test Items
Maristela Petrovic-Dzerdz – Collected Essays on Learning and Teaching, 2024
Large introductory classes, with their expansive curriculum, demand assessment strategies that blend efficiency with reliability, prompting the consideration of multiple-choice (MC) tests as a viable option. Crafting a high-quality MC test, however, necessitates a meticulous process involving reflection on assessment format appropriateness, test…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Alignment (Education)
Grace C. Tetschner; Sachin Nedungadi – Chemistry Education Research and Practice, 2025
Many undergraduate chemistry students hold alternate conceptions related to resonance--an important and fundamental topic of organic chemistry. To help address these alternate conceptions, an organic chemistry instructor could administer the resonance concept inventory (RCI), which is a multiple-choice assessment that was designed to identify…
Descriptors: Scientific Concepts, Concept Formation, Item Response Theory, Scores
Lei Guo; Wenjie Zhou; Xiao Li – Journal of Educational and Behavioral Statistics, 2024
The testlet design is very popular in educational and psychological assessments. This article proposes a new cognitive diagnosis model, the multiple-choice cognitive diagnostic testlet (MC-CDT) model for tests using testlets consisting of MC items. The MC-CDT model uses the original examinees' responses to MC items instead of dichotomously scored…
Descriptors: Multiple Choice Tests, Diagnostic Tests, Accuracy, Computer Software
Frank Feudel; Alexander Unger – International Journal of Mathematical Education in Science and Technology, 2025
In tertiary mathematics courses, students often struggle to develop an understanding of the mathematical concepts covered. One approach to address this problem is to implement so-called Concept-Tests: multiple-choice questions whose distractors represent common problems and misconceptions related to the concepts. While there…
Descriptors: College Mathematics, Mathematics Instruction, Mathematical Concepts, Concept Formation
Emily K. Toutkoushian; Huaping Sun; Mark T. Keegan; Ann E. Harman – Measurement: Interdisciplinary Research and Perspectives, 2024
Linear logistic test models (LLTMs), leveraging item response theory and linear regression, offer an elegant method for learning about item characteristics in complex content areas. This study used LLTMs to model single-best-answer, multiple-choice-question response data from two medical subspecialty certification examinations in multiple years…
Descriptors: Licensing Examinations (Professions), Certification, Medical Students, Test Items
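The linear logistic test model (LLTM) named in this abstract decomposes each item's difficulty into a linear combination of item features (often coded in a Q-matrix) weighted by basic parameters. The sketch below shows that decomposition only; the Q-matrix and weights are invented for illustration and are not from the study:

```python
def lltm_difficulties(q_matrix, eta):
    """Item difficulties under an LLTM-style decomposition:
    b_i = sum_k q_ik * eta_k, where q_ik indicates whether item i
    involves cognitive operation k and eta_k is that operation's
    basic-parameter weight."""
    return [sum(q * e for q, e in zip(row, eta)) for row in q_matrix]

# Hypothetical 3-item test involving 2 cognitive operations.
Q = [[1, 0],
     [0, 1],
     [1, 1]]
eta = [0.8, -0.3]  # assumed basic-parameter weights
b = lltm_difficulties(Q, eta)  # ≈ [0.8, -0.3, 0.5]
```

Fitting the eta weights (rather than a free difficulty per item) is what lets an LLTM characterize which content features drive difficulty in a complex domain.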
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests