Publication Date
In 2025: 8
Since 2024: 25
Since 2021 (last 5 years): 79
Since 2016 (last 10 years): 190
Descriptor
Multiple Choice Tests: 190
Test Validity: 134
Foreign Countries: 97
Test Reliability: 88
Test Items: 80
Test Construction: 72
Science Tests: 46
Difficulty Level: 40
Item Response Theory: 37
Item Analysis: 34
Undergraduate Students: 34
Author
Biancarosa, Gina: 4
Carlson, Sarah E.: 4
Davison, Mark L.: 4
Liu, Bowen: 4
Seipel, Ben: 4
Krell, Moritz: 3
Alonzo, Julie: 2
Anderson, Daniel: 2
Chapman, Kate M.: 2
Clauser, Brian E.: 2
Coniam, David: 2
Audience
Teachers: 2
Location
Indonesia: 18
Turkey: 16
Germany: 11
China: 6
Australia: 5
Iran: 5
Switzerland: 4
Canada: 3
Europe: 3
Thailand: 3
United Kingdom: 3
Güntay Tasçi – Science Insights Education Frontiers, 2024
The present study aimed to develop and validate a protein concept inventory (PCI) consisting of 25 multiple-choice (MC) questions to assess students' understanding of protein, which is a fundamental concept across different biology disciplines. The development process of the PCI involved a literature review to identify protein-related content,…
Descriptors: Science Instruction, Science Tests, Multiple Choice Tests, Biology
Sherwin E. Balbuena – Online Submission, 2024
This study introduces a new chi-square test statistic for testing the equality of response frequencies among distracters in multiple-choice tests. The formula uses the information from the number of correct answers and wrong answers, which becomes the basis of calculating the expected values of response frequencies per distracter. The method was…
Descriptors: Multiple Choice Tests, Statistics, Test Validity, Testing
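The distracter-equality test described in the abstract above can be illustrated with a short sketch. This is a hypothetical reconstruction of the general idea, not Balbuena's exact formula: under the null hypothesis, the wrong answers spread evenly across the k − 1 distracters, so each distracter's expected count is (total wrong) / (k − 1), and a chi-square-type statistic sums the squared deviations from that expectation.

```python
from collections import Counter

def distracter_chi_square(responses, correct_option, options):
    """Chi-square statistic for equal response frequencies among distracters.

    Under the null, each distracter's expected frequency is the number of
    wrong answers divided by the number of distracters (k - 1).
    """
    counts = Counter(responses)
    distracters = [opt for opt in options if opt != correct_option]
    wrong_total = sum(counts.get(d, 0) for d in distracters)
    expected = wrong_total / len(distracters)
    if expected == 0:
        return 0.0  # no wrong answers, nothing to test
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in distracters)

# Illustrative data: 100 examinees on one item where option B is correct.
responses = ["B"] * 60 + ["A"] * 20 + ["C"] * 15 + ["D"] * 5
stat = distracter_chi_square(responses, "B", ["A", "B", "C", "D"])  # ≈ 8.75
```

With 40 wrong answers over three distracters, the expected count per distracter is 40/3, and the statistic would be compared against a chi-square distribution with k − 2 degrees of freedom.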
Helen Zhang; Anthony Perry; Irene Lee – International Journal of Artificial Intelligence in Education, 2025
The rapid expansion of Artificial Intelligence (AI) in our society makes it urgent and necessary to develop young students' AI literacy so that they can become informed citizens and critical consumers of AI technology. Over the past decade many efforts have focused on developing curricular materials that make AI concepts accessible and engaging to…
Descriptors: Test Construction, Test Validity, Measures (Individuals), Artificial Intelligence
David G. Schreurs; Jaclyn M. Trate; Shalini Srinivasan; Melonie A. Teichert; Cynthia J. Luxford; Jamie L. Schneider; Kristen L. Murphy – Chemistry Education Research and Practice, 2024
With the already widespread nature of multiple-choice assessments and the increasing popularity of answer-until-correct, it is important to have methods available for exploring the validity of these types of assessments as they are developed. This work analyzes a 20-question multiple choice assessment covering introductory undergraduate chemistry…
Descriptors: Multiple Choice Tests, Test Validity, Introductory Courses, Science Tests
Ng, Emily – International Journal of Adult Education and Technology, 2020
The resources and time constraints of assessing large classes must always be weighed against the validity, reliability, and learning outcomes of the assessment tasks. With the digital revolution of the 21st century, educators can use computer technology to carry out large-scale assessment in higher education more efficiently. In this…
Descriptors: Nursing Students, Computer Assisted Testing, Student Evaluation, Multiple Choice Tests
Yaneva, Victoria; Clauser, Brian E.; Morales, Amy; Paniagua, Miguel – Advances in Health Sciences Education, 2022
Understanding the response process used by test takers when responding to multiple-choice questions (MCQs) is particularly important in evaluating the validity of score interpretations. Previous authors have recommended eye-tracking technology as a useful approach for collecting data on the processes test takers use to respond to test questions.…
Descriptors: Eye Movements, Artificial Intelligence, Scores, Test Interpretation
Lim, Alliyza; Brewer, Neil; Aistrope, Denise; Young, Robyn L. – Autism: The International Journal of Research and Practice, 2023
The Reading the Mind in the Eyes Test (RMET) is a purported theory of mind measure and one that reliably differentiates autistic and non-autistic individuals. However, concerns have been raised about the validity of the measure, with some researchers suggesting that the multiple-choice format of the RMET makes it susceptible to the undue influence…
Descriptors: Theory of Mind, Autism Spectrum Disorders, Test Validity, Multiple Choice Tests
Syahfitri, Jayanti; Firman, Harry; Redjeki, Sri; Sriyati, Siti – International Journal of Instruction, 2019
The purpose of this study was to develop the Critical Thinking Disposition Test in Biology as an alternative instrument for assessing the extent of one's disposition toward critical thinking, especially in university biology. The Critical Thinking Disposition Test in Biology is a multiple-choice test based on biological cases. This…
Descriptors: Biology, Critical Thinking, Science Instruction, Science Tests
Grace C. Tetschner; Sachin Nedungadi – Chemistry Education Research and Practice, 2025
Many undergraduate chemistry students hold alternate conceptions related to resonance--an important and fundamental topic of organic chemistry. To help address these alternate conceptions, an organic chemistry instructor could administer the resonance concept inventory (RCI), which is a multiple-choice assessment that was designed to identify…
Descriptors: Scientific Concepts, Concept Formation, Item Response Theory, Scores
Brundage, Mary Jane; Singh, Chandralekha – Physical Review Physics Education Research, 2023
We discuss the development and validation of the long version of a conceptual multiple-choice survey instrument, the Survey of Thermodynamic Processes and First and Second Laws-Long, suitable for introductory physics courses. It is an extended form of the original, shorter version developed and validated…
Descriptors: Test Construction, Test Validity, Multiple Choice Tests, Thermodynamics
Brian C. Leventhal; Dena Pastor – Educational and Psychological Measurement, 2024
Low-stakes test performance commonly reflects examinee ability and effort. Examinees exhibiting low effort may be identified through rapid guessing behavior throughout an assessment. There has been a plethora of methods proposed to adjust scores once rapid guesses have been identified, but these have been plagued by strong assumptions or the…
Descriptors: College Students, Guessing (Tests), Multiple Choice Tests, Item Response Theory
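The rapid-guessing detection mentioned in the abstract above is commonly operationalized with a per-item response-time threshold, and effort is then summarized as the proportion of responses showing solution behavior (the idea behind response-time effort indices). A minimal sketch, with the 3-second threshold purely illustrative:

```python
def response_time_effort(response_times, threshold=3.0):
    """Proportion of item responses at or above a time threshold.

    Responses faster than `threshold` seconds are treated as rapid
    guesses; the returned value is 1.0 for full solution behavior.
    The fixed threshold is an illustrative simplification -- real
    applications often set per-item thresholds empirically.
    """
    solution_behavior = [t >= threshold for t in response_times]
    return sum(solution_behavior) / len(solution_behavior)

times = [1.2, 14.5, 2.9, 30.1, 45.0]   # seconds spent on five items
rte = response_time_effort(times)       # 3 of 5 responses exceed 3 s -> 0.6
```

An examinee with a low effort index could then have their score flagged or adjusted, which is the class of methods the study compares.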
Douglas-Morris, Jan; Ritchie, Helen; Willis, Catherine; Reed, Darren – Anatomical Sciences Education, 2021
Multiple-choice (MC) anatomy "spot-tests" (identification-based assessments on tagged cadaveric specimens) offer a practical alternative to traditional free-response (FR) spot-tests. Conversion of the two spot-tests in an upper limb musculoskeletal anatomy unit of study from FR to a novel MC format, where one of five tagged structures on…
Descriptors: Multiple Choice Tests, Anatomy, Test Reliability, Difficulty Level
Gorney, Kylie – ProQuest LLC, 2023
Aberrant behavior refers to any type of unusual behavior that would not be expected under normal circumstances. In educational and psychological testing, such behaviors have the potential to severely bias the aberrant examinee's test score while also jeopardizing the test scores of countless others. It is therefore crucial that aberrant examinees…
Descriptors: Behavior Problems, Educational Testing, Psychological Testing, Test Bias
Yalinkilic, Funda; Gul, Seyda – Science Insights Education Frontiers, 2023
The aim of this study is to develop a valid and reliable achievement test on the subject of 'Basic Compounds in the Structure of Living Things'. During the preparation of the draft form of the test, a 32-item question pool was created by the researchers in light of the relevant literature. Then, these questions were presented for expert opinion…
Descriptors: Test Construction, Science Achievement, Science Tests, Test Validity
Dönmez, Onur; Akbulut, Yavuz; Telli, Esra; Kaptan, Miray; Özdemir, Ibrahim H.; Erdem, Mukaddes – Education and Information Technologies, 2022
In the current study, we aimed to develop a reliable and valid scale to address individual cognitive load types. Existing scale development studies involved a limited number of items without adequate convergent, discriminant, and criterion validity checks. Through a multistep correlational study, we proposed a three-factor scale with 13 items to…
Descriptors: Test Construction, Content Validity, Construct Validity, Test Reliability