Showing 1 to 15 of 4,734 results
Peer reviewed
Junhuan Wei; Qin Wang; Buyun Dai; Yan Cai; Dongbo Tu – Journal of Educational Measurement, 2024
Traditional IRT and IRTree models are not appropriate for analyzing items that combine a multiple-choice (MC) task and a constructed-response (CR) task within a single item. To address this issue, this study proposed an item response tree model (called IRTree-MR) to accommodate items that contain different response types at different…
Descriptors: Item Response Theory, Models, Multiple Choice Tests, Cognitive Processes
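For context, an IRTree represents such an item as a tree of sequential decision nodes, each governed by its own IRT model, with end-node probabilities obtained by multiplying along the branches. A minimal Python sketch, assuming 2PL node models and a hypothetical MC-then-CR tree (the parameter values are made up; this is not the authors' IRTree-MR specification):

import numpy as np

def p_2pl(theta, a, b):
    # 2PL probability of taking the "success" branch at a node
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def irtree_end_node_probs(theta, a_mc=1.2, b_mc=0.0, a_cr=1.0, b_cr=0.5):
    # Node 1: solve the MC task; node 2 (reached only on MC success): the CR task.
    # Each end node's probability is the product of its branch probabilities.
    p_mc = p_2pl(theta, a_mc, b_mc)
    p_cr = p_2pl(theta, a_cr, b_cr)
    return {
        "MC wrong": 1 - p_mc,
        "MC right, CR wrong": p_mc * (1 - p_cr),
        "MC right, CR right": p_mc * p_cr,
    }

print(irtree_end_node_probs(theta=0.8))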
Peer reviewed
Séverin Lions; María Paz Blanco; Pablo Dartnell; Carlos Monsalve; Gabriel Ortega; Julie Lemarié – Applied Measurement in Education, 2024
Multiple-choice items are universally used in formal education. Since they should assess learning, not test-wiseness or guesswork, they must be constructed following the highest possible standards. Hundreds of item-writing guides have provided guidelines to help test developers adopt appropriate strategies to define the distribution and sequence…
Descriptors: Test Construction, Multiple Choice Tests, Guidelines, Test Items
Peer reviewed
Rita Arfi Astuti Ningroom; Sri Yamtinah; Riyadi – Journal of Education and Learning (EduLearn), 2025
Natural and social science offer many engaging scientific concepts to learn. The initial conceptions a student holds may contradict the accepted concepts, which is what gives rise to misconceptions. Misconceptions are identified using misconception detection tests. In practice, the development and use of diagnostic test…
Descriptors: Foreign Countries, Test Construction, Diagnostic Tests, Multiple Choice Tests
Peer reviewed
Kentaro Fukushima; Nao Uchida; Kensuke Okada – Journal of Educational and Behavioral Statistics, 2025
Diagnostic tests are typically administered in a multiple-choice (MC) format due to its advantages of objectivity and time efficiency. The multiple-choice deterministic inputs, noisy "and" gate (MC-DINA) family of models, a representative class of cognitive diagnostic models for MC items, efficiently and parsimoniously estimates the mastery profiles of…
Descriptors: Diagnostic Tests, Cognitive Measurement, Multiple Choice Tests, Educational Assessment
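For background, the classical DINA model (the binary case underlying this family) predicts a correct response with probability 1 - s only when the examinee masters every attribute the item's Q-matrix row requires, and with probability g otherwise. A minimal sketch with made-up slip and guess values; the option-level MC-DINA extension is not reproduced here:

import numpy as np

def dina_prob_correct(alpha, q, slip, guess):
    # eta = 1 iff the examinee masters every attribute the item requires
    eta = int(np.all(alpha[q == 1] == 1))
    return (1 - slip) ** eta * guess ** (1 - eta)

# Hypothetical case: item requires attributes 0 and 2; examinee lacks attribute 2.
alpha = np.array([1, 1, 0])  # mastery profile
q = np.array([1, 0, 1])      # item's Q-matrix row
print(dina_prob_correct(alpha, q, slip=0.1, guess=0.2))  # eta = 0, so 0.2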
Kala Krishna; Pelin Akyol; Esma Ozer – National Bureau of Economic Research, 2025
Exams are designed to rank students objectively by ability, using elements such as time limits, the number and difficulty of questions, and negative marking policies. Using data from a lab-in-the-field experiment, we develop and estimate a model of student behavior in multiple-choice exams that incorporates the effects of time constraints…
Descriptors: Multiple Choice Tests, Student Behavior, Response Style (Tests), Time
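To make the role of negative marking concrete: with m options and a penalty of 1/(m - 1) marks per wrong answer, blind guessing is score-neutral in expectation. A quick sketch of this generic arithmetic (not the paper's estimated behavioral model):

def expected_guess_score(n_options, penalty):
    # Gain 1 mark with probability 1/n, lose `penalty` marks otherwise.
    p = 1.0 / n_options
    return p - (1 - p) * penalty

# Four options with the common 1/3 penalty: guessing yields 0 in expectation.
print(expected_guess_score(4, 1/3))  # 0.0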
Peer reviewed
Stefanie A. Wind; Yuan Ge – Measurement: Interdisciplinary Research and Perspectives, 2024
Mixed-format assessments made up of multiple-choice (MC) items and constructed-response (CR) items scored using rater judgments involve unique psychometric considerations. When these item types are combined to estimate examinee achievement, information about the psychometric quality of each component can depend on that of the other. For…
Descriptors: Interrater Reliability, Test Bias, Multiple Choice Tests, Responses
Peer reviewed
Janet Mee; Ravi Pandian; Justin Wolczynski; Amy Morales; Miguel Paniagua; Polina Harik; Peter Baldwin; Brian E. Clauser – Advances in Health Sciences Education, 2024
Recent advances in automated scoring technology have made it practical to replace multiple-choice questions (MCQs) with short-answer questions (SAQs) in large-scale, high-stakes assessments. However, most previous research comparing these formats has used small examinee samples testing under low-stakes conditions. Additionally, previous studies…
Descriptors: Multiple Choice Tests, High Stakes Tests, Test Format, Test Items
Peer reviewed
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to reveal how accurately multiple-choice test item parameters are estimated under item response theory models. Materials/methods: The researchers relied on measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
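A minimal sketch of such absolute-difference accuracy indicators, assuming the true (generating) parameter values are available from simulation; the function name and example numbers are hypothetical:

import numpy as np

def accuracy_indicators(estimated, actual):
    # Absolute difference between estimated and actual values, plus bias and RMSE.
    est, act = np.asarray(estimated), np.asarray(actual)
    diff = est - act
    return {
        "mean_abs_error": np.abs(diff).mean(),
        "bias": diff.mean(),
        "rmse": np.sqrt((diff ** 2).mean()),
    }

# Hypothetical item-difficulty estimates vs. the values that generated the data:
print(accuracy_indicators([0.52, -1.1, 1.9], [0.50, -1.0, 2.0]))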
Sherwin E. Balbuena – Online Submission, 2024
This study introduces a new chi-square test statistic for testing the equality of response frequencies among distracters in multiple-choice tests. The formula uses information from the numbers of correct and wrong answers, which becomes the basis for calculating the expected response frequencies per distracter. The method was…
Descriptors: Multiple Choice Tests, Statistics, Test Validity, Testing
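For reference, the conventional baseline test here sets the expected count for each distracter to an even share of the wrong answers and applies a one-way chi-square with k - 1 degrees of freedom. A minimal sketch of that standard test (the paper's new statistic, which also uses the correct-answer count, is not reproduced in the abstract):

import numpy as np
from scipy.stats import chisquare

def distracter_chi_square(distracter_counts):
    # Expected count per distracter = (total wrong answers) / (number of distracters)
    obs = np.asarray(distracter_counts, dtype=float)
    expected = np.full_like(obs, obs.sum() / len(obs))
    return chisquare(f_obs=obs, f_exp=expected)  # df = k - 1

# Hypothetical item whose 60 wrong answers fall unevenly across three distracters:
print(distracter_chi_square([35, 15, 10]))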
Peer reviewed
Christopher Wheatley; James Wells; John Stewart – Physical Review Physics Education Research, 2024
The Brief Electricity and Magnetism Assessment (BEMA) is a multiple-choice instrument commonly used to measure introductory undergraduate students' conceptual understanding of electricity and magnetism. This study used a network analysis technique called modified module analysis-partial (MMA-P) to identify clusters of correlated responses, also…
Descriptors: Multiple Choice Tests, Energy, Magnets, Undergraduate Students
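As a rough illustration of the general approach (a generic module analysis, not MMA-P itself): treat items as network nodes, link pairs whose response correlation exceeds a threshold, and extract modules with a community-detection algorithm. A sketch assuming a scored examinee-by-item response matrix and an arbitrary threshold:

import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def response_modules(responses, threshold=0.2):
    # responses: examinees x items matrix; correlate columns (items).
    corr = np.corrcoef(responses, rowvar=False)
    np.fill_diagonal(corr, 0.0)
    adj = np.where(corr > threshold, corr, 0.0)  # keep only strong links
    graph = nx.from_numpy_array(adj)
    return list(greedy_modularity_communities(graph, weight="weight"))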
Peer reviewed
Patricia Dowsett; Nathanael Reinertsen – Australian Journal of Language and Literacy, 2023
Senior secondary Literature courses in Australia all aim, to various extents, to develop students' critical literacy skills. These aims share emphases on reading, reflecting and responding critically to texts, on critical analysis and critical ideas, and on forming interpretations informed by critical perspectives. Critical literacy is not…
Descriptors: Foreign Countries, High School Students, Literacy, Multiple Choice Tests
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and the types of written annotations or markings that learners may use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to explore the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Peer reviewed
Mingfeng Xue; Mark Wilson – Applied Measurement in Education, 2024
Multidimensionality is common in psychological and educational measurements. This study focuses on dimensions that converge at the upper anchor (i.e., the highest acquisition status defined in a learning progression) and compares different ways of dealing with them using the multidimensional random coefficients multinomial logit model and scale…
Descriptors: Learning Trajectories, Educational Assessment, Item Response Theory, Evolution
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
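One filtering step the abstract alludes to can be sketched as an embedding-similarity screen: discard candidate distractors so close to the correct answer that they are likely synonymous with it. Here `embed` is a hypothetical stand-in for any pretrained sentence encoder, and the threshold is made up:

import numpy as np

def embed(text):
    # Hypothetical placeholder: a real system would call a pretrained encoder.
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.normal(size=64)

def filter_distractors(answer, candidates, max_sim=0.8):
    # Keep only candidates whose cosine similarity to the answer is below max_sim.
    a = embed(answer)
    keep = []
    for c in candidates:
        v = embed(c)
        sim = float(a @ v / (np.linalg.norm(a) * np.linalg.norm(v)))
        if sim < max_sim:
            keep.append(c)
    return keep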
Peer reviewed
Berenbon, Rebecca F.; McHugh, Bridget C. – Educational Measurement: Issues and Practice, 2023
To assemble a high-quality test, psychometricians rely on subject matter experts (SMEs) to write high-quality items. However, SMEs are not typically given the opportunity to provide input on which content standards are most suitable for multiple-choice questions (MCQs). In the present study, we explored the relationship between perceived MCQ…
Descriptors: Test Items, Multiple Choice Tests, Standards, Difficulty Level