Publication Date
In 2025: 1
Since 2024: 4
Since 2021 (last 5 years): 10
Since 2016 (last 10 years): 20
Since 2006 (last 20 years): 42
Descriptor
Multiple Choice Tests: 92
Scoring: 92
Test Items: 92
Test Construction: 32
Test Reliability: 20
Test Format: 19
Computer Assisted Testing: 16
Item Analysis: 16
Item Response Theory: 16
Difficulty Level: 14
Foreign Countries: 14
Author
Bennett, Randy Elliot: 5
Anderson, Paul S.: 2
Frary, Robert B.: 2
Haladyna, Thomas M.: 2
Kehoe, Jerard: 2
Melican, Gerald J.: 2
Slepkov, Aaron D.: 2
Alicia A. Stoltenberg: 1
Ault, Marilyn: 1
Bao, Lei: 1
Bauer, Daniel: 1
Education Level
Secondary Education: 13
Higher Education: 10
Postsecondary Education: 9
Elementary Education: 7
Elementary Secondary Education: 6
High Schools: 5
Junior High Schools: 4
Middle Schools: 4
Grade 7: 3
Grade 8: 2
Grade 5: 1
Location
Arizona: 6
Canada: 4
California: 2
China: 2
United States: 2
Australia: 1
Czech Republic: 1
Europe: 1
Florida: 1
Germany: 1
Israel: 1
Laws, Policies, & Programs
National Defense Education Act: 1
Kunal Sareen – Innovations in Education and Teaching International, 2024
This study examines the proficiency of ChatGPT, an AI language model, in answering questions on the Situational Judgement Test (SJT), a widely used assessment tool for evaluating the fundamental competencies of medical graduates in the UK. A total of 252 SJT questions from the "Oxford Assess and Progress: Situational Judgement Test"…
Descriptors: Ethics, Decision Making, Artificial Intelligence, Computer Software
Guo, Wenjing; Wind, Stefanie A. – Journal of Educational Measurement, 2021
The use of mixed-format tests made up of multiple-choice (MC) items and constructed response (CR) items is popular in large-scale testing programs, including the National Assessment of Educational Progress (NAEP) and many district- and state-level assessments in the United States. Rater effects, or raters' scoring tendencies that result in…
Descriptors: Test Format, Multiple Choice Tests, Scoring, Test Items
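For readers unfamiliar with rater effects, here is a minimal Python sketch of one very simple severity index: each rater's mean deviation from the per-item average across all raters. The data and the index are invented for illustration and are not the model used in the study.

```python
# Illustrative sketch (not the authors' method): a crude rater-severity
# index on constructed-response (CR) scores, computed as each rater's
# mean shortfall relative to the per-item average across all raters.
from collections import defaultdict

# Hypothetical data: (rater_id, item_id, awarded_score)
ratings = [
    ("r1", "cr1", 3), ("r1", "cr2", 2),
    ("r2", "cr1", 4), ("r2", "cr2", 3),
    ("r3", "cr1", 2), ("r3", "cr2", 1),
]

item_scores = defaultdict(list)
for _, item, score in ratings:
    item_scores[item].append(score)
item_mean = {item: sum(s) / len(s) for item, s in item_scores.items()}

# A rater's severity: how far, on average, they score below the item mean.
severity = defaultdict(list)
for rater, item, score in ratings:
    severity[rater].append(item_mean[item] - score)
for rater, devs in severity.items():
    print(rater, round(sum(devs) / len(devs), 2))  # >0 = severe, <0 = lenient
```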
Alicia A. Stoltenberg – ProQuest LLC, 2024
Multiple-select multiple-choice items, or multiple-choice items with more than one correct answer, are used to quickly assess content on standardized assessments. Because there are multiple keys to these item types, there are also multiple ways to score student responses to these items. The purpose of this study was to investigate how changing the…
Descriptors: Scoring, Evaluation Methods, Multiple Choice Tests, Standardized Tests
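To make concrete why multiple keys admit multiple scoring rules, here is a minimal Python sketch of three common options; the specific rules and weights are illustrative, not the designs compared in the dissertation.

```python
# A sketch of three common ways to score a multiple-select item.

def score_all_or_nothing(keys: set, response: set) -> float:
    """Full credit only when the selection matches the key set exactly."""
    return 1.0 if response == keys else 0.0

def score_partial(keys: set, response: set) -> float:
    """Credit per key selected, no penalty for extra selections."""
    return len(response & keys) / len(keys)

def score_penalized(keys: set, response: set) -> float:
    """Credit per key selected minus a penalty per distractor, floored at 0."""
    raw = len(response & keys) - len(response - keys)
    return max(raw, 0) / len(keys)

keys = {"A", "C", "D"}
for resp in [{"A", "C", "D"}, {"A", "C"}, {"A", "B", "C", "D"}]:
    print(resp, score_all_or_nothing(keys, resp),
          round(score_partial(keys, resp), 2),
          round(score_penalized(keys, resp), 2))
```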
Güntay Tasçi – Science Insights Education Frontiers, 2024
The present study has aimed to develop and validate a protein concept inventory (PCI) consisting of 25 multiple-choice (MC) questions to assess students' understanding of protein, which is a fundamental concept across different biology disciplines. The development process of the PCI involved a literature review to identify protein-related content,…
Descriptors: Science Instruction, Science Tests, Multiple Choice Tests, Biology
Congning Ni; Bhashithe Abeysinghe; Juanita Hicks – International Electronic Journal of Elementary Education, 2025
The National Assessment of Educational Progress (NAEP), often referred to as The Nation's Report Card, offers a window into the state of the U.S. K-12 education system. Since 2017, NAEP has transitioned to digital assessments, opening new research opportunities that were previously impossible. Process data tracks students' interactions with the…
Descriptors: Reaction Time, Multiple Choice Tests, Behavior Change, National Competency Tests
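As a hypothetical illustration of the kinds of features process data can support, the Python sketch below derives time-on-item and answer changes from an event log; the log format is invented and is not NAEP's.

```python
# Invented event format for illustration: (timestamp_sec, item_id, action, value).
events = [
    (0.0,  "m1", "enter_item", None),
    (12.4, "m1", "select",     "B"),
    (21.9, "m1", "select",     "C"),   # an answer change
    (25.0, "m1", "leave_item", None),
]

time_on_item = events[-1][0] - events[0][0]
selections = [v for _, _, a, v in events if a == "select"]
answer_changes = max(len(selections) - 1, 0)
print(f"time on item: {time_on_item}s, answer changes: {answer_changes}")
```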
Zhai, Xiaoming; Li, Min – International Journal of Science Education, 2021
This study provides a partial-credit scoring (PCS) approach to awarding students' performance on multiple-choice items in science education. The approach is built on "fundamental ideas," the critical pieces of students' understanding and knowledge to solve science problems. We link each option of the items to several specific fundamental…
Descriptors: Scoring, Multiple Choice Tests, Science Tests, Test Items
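The general idea of option-linked partial credit can be sketched in a few lines of Python: each option is mapped to the fundamental ideas it reflects, and credit is the fraction of the keyed option's ideas that the chosen option captures. The mapping below is invented for illustration and is not the study's instrument.

```python
# Hypothetical mapping from each option to the fundamental ideas it reflects.
option_ideas = {
    "A": {"conservation", "system_boundary"},  # keyed answer: both ideas
    "B": {"conservation"},                     # one correct idea
    "C": set(),                                # no correct ideas
}
key = "A"

def partial_credit(choice: str) -> float:
    """Fraction of the keyed option's ideas captured by the chosen option."""
    return len(option_ideas[choice] & option_ideas[key]) / len(option_ideas[key])

for choice in "ABC":
    print(choice, partial_credit(choice))  # 1.0, 0.5, 0.0
```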
Slepkov, Aaron D.; Godfrey, Alan T. K. – Applied Measurement in Education, 2019
The answer-until-correct (AUC) method of multiple-choice (MC) testing involves test respondents making selections until the keyed answer is identified. Despite attendant benefits that include improved learning, broad student adoption, and facile administration of partial credit, the use of AUC methods for classroom testing has been extremely…
Descriptors: Multiple Choice Tests, Test Items, Test Reliability, Scores
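One common AUC partial-credit scheme reduces credit with each incorrect selection made before the keyed answer is found; the linear schedule in this Python sketch is illustrative, since published AUC and IF-AT implementations use various credit schedules.

```python
# Linearly decreasing answer-until-correct (AUC) credit, for illustration.

def auc_credit(num_options: int, attempts_used: int) -> float:
    """attempts_used = total selections made, including the correct one."""
    return max(num_options - attempts_used, 0) / (num_options - 1)

for attempts in range(1, 5):
    print(attempts, round(auc_credit(4, attempts), 2))  # 1.0, 0.67, 0.33, 0.0
```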
Tomkowicz, Joanna; Kim, Dong-In; Wan, Ping – Online Submission, 2022
In this study we evaluated the stability of item parameters and student scores, using the pre-equated (pre-pandemic) parameters from Spring 2019 and post-equated (post-pandemic) parameters from Spring 2021 in two calibration and equating designs related to item parameter treatment: re-estimating all anchor parameters (Design 1) and holding the…
Descriptors: Equated Scores, Test Items, Evaluation Methods, Pandemics
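For context on anchor-based equating generally (not the two specific designs compared in the study), here is a Python sketch of mean-sigma linking, one standard way anchor items place a new calibration on an old scale; the difficulty values are invented.

```python
# Mean-sigma linking of item difficulties via common anchor items.
from statistics import mean, stdev

# Hypothetical anchor-item difficulties under the two calibrations.
b_old = [-1.2, -0.4, 0.1, 0.8, 1.5]   # Spring 2019 (pre-pandemic)
b_new = [-1.0, -0.3, 0.3, 1.0, 1.8]   # Spring 2021 (post-pandemic)

A = stdev(b_old) / stdev(b_new)        # slope of the linking transformation
B = mean(b_old) - A * mean(b_new)      # intercept

b_new_on_old_scale = [A * b + B for b in b_new]
print([round(b, 2) for b in b_new_on_old_scale])
```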
Zhou, Shao-Na; Liu, Qiao-Yi; Koenig, Kathleen; Xiao, Qiu-ye; Li, Yang; Bao, Lei – Journal of Baltic Science Education, 2021
Lawson's Classroom Test of Scientific Reasoning (LCTSR) is a popular instrument that measures the development of students' scientific reasoning skills. The instrument has a two-tier question design, which has led to multiple ways of scoring and interpretation. In this research, a method of pattern analysis was proposed and applied to analyze…
Descriptors: Science Tests, Science Process Skills, Logical Thinking, Multiple Choice Tests
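The Python sketch below shows why a two-tier design admits multiple scorings: tier 1 asks for an answer and tier 2 for the reasoning, so responses can be scored strictly, partially, or coded as patterns. The codes here are a generic illustration, not the study's pattern-analysis method.

```python
# Three ways to treat a two-tier (answer + reasoning) response.

def score_pair(t1_correct: bool, t2_correct: bool) -> dict:
    return {
        "strict":  int(t1_correct and t2_correct),        # both tiers correct
        "partial": (int(t1_correct) + int(t2_correct)) / 2,
        "pattern": f"{int(t1_correct)}{int(t2_correct)}",  # 11 / 10 / 01 / 00
    }

for pair in [(True, True), (True, False), (False, True), (False, False)]:
    print(pair, score_pair(*pair))
```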
Schilling, Jim F. – Athletic Training Education Journal, 2019
Context: Accurate scoring of summative assessments and discrimination of the level of subject-matter knowledge are critical to fairness for learners in health care professional programs and to assuring stakeholders of competent providers. An evidence-based approach to determining examination quality for the assessment of applied knowledge is…
Descriptors: Athletics, Allied Health Occupations Education, Test Items, Questioning Techniques
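Two classical item-analysis statistics often used to judge examination quality are item difficulty (proportion correct) and the item-total point-biserial correlation; the Python sketch below computes both on invented data and is illustrative, not the study's procedure.

```python
# Classical item analysis: difficulty (p) and point-biserial discrimination.
from statistics import mean, pstdev

# Rows = examinees, columns = items (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
]

totals = [sum(row) for row in responses]
for j in range(len(responses[0])):
    item = [row[j] for row in responses]
    p = mean(item)  # difficulty: proportion answering correctly
    # Point-biserial: r = ((M1 - M) / s) * sqrt(p / q), with M1 the mean
    # total of those answering correctly, M and s over all examinees.
    correct_totals = [t for x, t in zip(item, totals) if x == 1]
    rpb = ((mean(correct_totals) - mean(totals)) / pstdev(totals)
           * (p / (1 - p)) ** 0.5) if 0 < p < 1 else float("nan")
    print(f"item {j + 1}: p = {p:.2f}, point-biserial = {rpb:.2f}")
```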
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension, and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Türkoguz, Suat – International Education Studies, 2020
This study aims to investigate the test scores of the three-tier diagnostic chemistry test (TDCT) and the multiple-choice chemistry test (MCCT) by response change behaviour (RCB). It is a descriptive study aiming to investigate the item response efforts of the TDCT and MCCT in a…
Descriptors: Chemistry, Science Instruction, Scientific Concepts, Teaching Methods
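Response change behaviour is often tallied by classifying each answer change as wrong-to-right, right-to-wrong, or wrong-to-wrong; the Python sketch below shows one such tally on invented data and is not the study's coding scheme.

```python
# Classifying successive answer changes on one item.
key = "C"
selection_history = ["B", "D", "C"]  # successive selections by one examinee

changes = []
for prev, curr in zip(selection_history, selection_history[1:]):
    kind = ("wrong_to_right" if prev != key and curr == key else
            "right_to_wrong" if prev == key and curr != key else
            "wrong_to_wrong")
    changes.append(kind)
print(changes)  # ['wrong_to_wrong', 'wrong_to_right']
```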
Coniam, David; Lee, Tony; Milanovic, Michael; Pike, Nigel; Zhao, Wen – Language Education & Assessment, 2022
The calibration of test materials generally involves the interaction between empirical analysis and expert judgement. This paper explores the extent to which scale familiarity might affect expert judgement as a component of test validation in the calibration process. It forms part of a larger study that investigates the alignment of the…
Descriptors: Specialists, Language Tests, Test Validity, College Faculty
Lin, Chih-Kai – Language Assessment Quarterly, 2018
With multiple options to choose from, there is always a chance of lucky guessing by examinees on multiple-choice (MC) items, thereby potentially introducing bias in item difficulty estimates. Correct responses by random guessing thus pose threats to the validity of claims made from test performance on an MC test. Under the Rasch framework, the…
Descriptors: Guessing (Tests), Item Response Theory, Multiple Choice Tests, Language Tests
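To see why lucky guessing threatens Rasch difficulty estimates, compare the Rasch item response function, which has no guessing parameter, with the three-parameter logistic (3PL) model, whose lower asymptote c absorbs guessing. The Python sketch below uses illustrative parameter values.

```python
# Rasch vs. 3PL item response functions.
import math

def rasch(theta: float, b: float) -> float:
    return 1 / (1 + math.exp(-(theta - b)))

def three_pl(theta: float, a: float, b: float, c: float) -> float:
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

theta, b = -2.0, 0.0  # low-ability examinee, moderate item
print(round(rasch(theta, b), 3))               # ~0.119: Rasch prediction
print(round(three_pl(theta, 1.0, b, 0.25), 3)) # ~0.339: floor from guessing
```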
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, by changing the amount and order of the content presented, the test taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
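The basic DOMC flow described in the literature can be sketched in a few lines of Python: options appear one at a time in random order, the examinee accepts or rejects each, and administration stops once the keyed option is accepted (correct) or rejected (incorrect). Real DOMC delivery rules vary, so this is a simplified illustration.

```python
# Simplified Discrete Option Multiple Choice (DOMC) administration.
import random

def administer_domc(options: list, key: str, accept) -> bool:
    """accept(option) is the examinee's yes/no decision on each option."""
    for opt in random.sample(options, len(options)):
        chosen = accept(opt)
        if opt == key:
            return chosen   # accepting the key -> correct; rejecting -> not
        if chosen:
            return False    # accepting a distractor ends the item, incorrect
    return False

# Example: a simulated examinee who accepts only the keyed option.
print(administer_domc(["A", "B", "C", "D"], "C", lambda o: o == "C"))  # True
```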