Publication Date
  In 2025: 0
  Since 2024: 4
  Since 2021 (last 5 years): 8
  Since 2016 (last 10 years): 18
  Since 2006 (last 20 years): 35
Descriptor
  Evaluation Methods: 49
  Multiple Choice Tests: 49
  Test Items: 49
  Test Construction: 22
  Student Evaluation: 20
  Difficulty Level: 11
  Item Response Theory: 11
  Item Analysis: 10
  Test Validity: 9
  Foreign Countries: 6
  Mathematics Tests: 6
Author
  Ahmed, Wondimu: 1
  Aiken, Lewis R.: 1
  Akarsu, Bayram: 1
  Alicia A. Stoltenberg: 1
  Anthony Petrosino: 1
  Armenski, Goce: 1
  Avsec, Stanislav: 1
  Azevedo, Jose: 1
  Babo, Lurdes: 1
  Ball, Deborah Loewenberg: 1
  Barry, Carol: 1
Education Level
  Elementary Education: 10
  Higher Education: 10
  Secondary Education: 10
  Postsecondary Education: 9
  Elementary Secondary Education: 6
  Middle Schools: 5
  High Schools: 4
  Junior High Schools: 3
  Grade 10: 2
  Grade 8: 2
  Grade 11: 1
Audience
  Practitioners: 2
  Teachers: 2
Laws, Policies, & Programs
  No Child Left Behind Act 2001: 1
Assessments and Surveys
  National Assessment of…: 1
  State of Texas Assessments of…: 1
Maristela Petrovic-Dzerdz – Collected Essays on Learning and Teaching, 2024
Large introductory classes, with their expansive curriculum, demand assessment strategies that blend efficiency with reliability, prompting the consideration of multiple-choice (MC) tests as a viable option. Crafting a high-quality MC test, however, necessitates a meticulous process involving reflection on assessment format appropriateness, test…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Alignment (Education)
Yi-Chun Chen; Hsin-Kai Wu; Ching-Ting Hsin – Research in Science & Technological Education, 2024
Background and Purpose: As a growing number of instructional units have been developed to promote young children's scientific and engineering practices (SEPs), understanding how to evaluate and assess children's SEPs is imperative. However, paper-and-pencil assessments would not be suitable for young children because of their limited reading and…
Descriptors: Science Education, Engineering Education, Elementary School Students, Middle School Students
Alicia A. Stoltenberg – ProQuest LLC, 2024
Multiple-select multiple-choice items, or multiple-choice items with more than one correct answer, are used to quickly assess content on standardized assessments. Because there are multiple keys to these item types, there are also multiple ways to score student responses to these items. The purpose of this study was to investigate how changing the…
Descriptors: Scoring, Evaluation Methods, Multiple Choice Tests, Standardized Tests
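The snippet above does not name the scoring rules the study compared, so the following is only a hedged sketch of three conventions commonly discussed for multiple-select items (all-or-nothing, partial credit per key, and partial credit with a penalty for wrong selections); the rule names, formulas, and example data are assumptions, not the study's methods.

```python
# Hypothetical scoring rules for a multiple-select multiple-choice item.
# These are illustrative conventions, not the rules evaluated in the study.

def score_all_or_nothing(selected, keys):
    """1 point only if the selected set exactly matches the key set."""
    return 1.0 if set(selected) == set(keys) else 0.0

def score_partial_credit(selected, keys):
    """Fraction of correct keys selected, ignoring wrong selections."""
    keys = set(keys)
    return len(set(selected) & keys) / len(keys)

def score_with_penalty(selected, keys, n_options):
    """Credit for keys selected minus a penalty for distractors selected, floored at zero."""
    keys, selected = set(keys), set(selected)
    hits = len(selected & keys) / len(keys)
    false_alarms = len(selected - keys) / max(n_options - len(keys), 1)
    return max(hits - false_alarms, 0.0)

# Example: item with options A-E, keys B and D, student selects B and C.
print(score_all_or_nothing({"B", "C"}, {"B", "D"}))   # 0.0
print(score_partial_credit({"B", "C"}, {"B", "D"}))   # 0.5
print(score_with_penalty({"B", "C"}, {"B", "D"}, 5))  # 0.5 - 1/3 ≈ 0.17
```

Changing which of these rules is applied changes both item scores and downstream statistics, which is the kind of comparison the abstract describes.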
Qian Liu; Navé Wald; Chandima Daskon; Tony Harland – Innovations in Education and Teaching International, 2024
This qualitative study looks at multiple-choice questions (MCQs) in examinations and their effectiveness in testing higher-order cognition. While there are claims that MCQs can do this, we consider many assertions problematic because of the difficulty in interpreting what higher-order cognition consists of and whether or not assessment tasks…
Descriptors: Multiple Choice Tests, Critical Thinking, College Faculty, Student Evaluation
Walter M. Stroup; Anthony Petrosino; Corey Brady; Karen Duseau – North American Chapter of the International Group for the Psychology of Mathematics Education, 2023
Tests of statistical significance often play a decisive role in establishing the empirical warrant of evidence-based research in education. The results from pattern-based assessment items, as introduced in this paper, are categorical and multimodal and do not immediately support the use of measures of central tendency as typically related to…
Descriptors: Statistical Significance, Comparative Analysis, Research Methodology, Evaluation Methods
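The snippet says results from pattern-based items are categorical and multimodal, but it does not state which significance test the authors use. As a generic, assumed illustration of comparing categorical response-pattern counts between two groups (not the paper's reported analysis), a chi-square test of independence might look like this:

```python
# Hypothetical sketch: comparing categorical response-pattern counts between
# two groups with a chi-square test of independence. The data and the choice
# of test are illustrative assumptions, not the paper's analysis.
from scipy.stats import chi2_contingency

# Rows: groups (e.g., two instructional conditions);
# columns: counts of students exhibiting each response pattern.
counts = [
    [34, 12, 9, 5],   # group 1
    [18, 22, 11, 9],  # group 2
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
```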
Bickerton, Robert Thomas; Sangwin, Chris J. – International Journal of Mathematical Education in Science and Technology, 2022
We discuss a practical method for assessing mathematical proof online. We examine the use of faded worked examples and reading comprehension questions to understand proof. By breaking down a given proof, we formulate a checklist that can be used to generate comprehension questions which can be assessed automatically online. We then provide some…
Descriptors: Mathematics Instruction, Validity, Mathematical Logic, Evaluation Methods
Smith, Trevor I.; Bendjilali, Nasrine – Physical Review Physics Education Research, 2022
Several recent studies have employed item response theory (IRT) to rank incorrect responses to commonly used research-based multiple-choice assessments. These studies use Bock's nominal response model (NRM) for applying IRT to categorical (nondichotomous) data, but the response rankings only utilize half of the parameters estimated by the model.…
Descriptors: Item Response Theory, Test Items, Multiple Choice Tests, Science Tests
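For readers unfamiliar with Bock's nominal response model mentioned above: the NRM assigns each response category k a probability proportional to exp(a_k·theta + c_k). The sketch below shows that category-probability formula with made-up parameter values; it is not the authors' estimation code and the numbers are not estimates from the study.

```python
# Minimal sketch of Bock's nominal response model (NRM) category probabilities.
# The slope (a) and intercept (c) values are invented for illustration only.
import numpy as np

def nrm_probabilities(theta, a, c):
    """P(response = k | theta) = exp(a_k*theta + c_k) / sum_j exp(a_j*theta + c_j)."""
    z = np.asarray(a) * theta + np.asarray(c)
    z -= z.max()                 # subtract max for numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

# One item with four response options (one key, three distractors).
a = [1.2, -0.3, -0.4, -0.5]      # slope for each category
c = [0.5, 0.2, -0.1, -0.6]       # intercept for each category
for theta in (-2.0, 0.0, 2.0):
    print(theta, np.round(nrm_probabilities(theta, a, c), 3))
```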
Tomkowicz, Joanna; Kim, Dong-In; Wan, Ping – Online Submission, 2022
In this study we evaluated the stability of item parameters and student scores, using the pre-equated (pre-pandemic) parameters from Spring 2019 and post-equated (post-pandemic) parameters from Spring 2021 in two calibration and equating designs related to item parameter treatment: re-estimating all anchor parameters (Design 1) and holding the…
Descriptors: Equated Scores, Test Items, Evaluation Methods, Pandemics
A Hybrid Approach for Automatic Generation of Named Entity Distractors for Multiple Choice Questions
Patra, Rakesh; Saha, Sujan Kumar – Education and Information Technologies, 2019
Assessment plays an important role in learning, and Multiple Choice Questions (MCQs) are quite popular in large-scale evaluations. Technology-enabled learning necessitates smart assessment, so automatic MCQ generation has become increasingly popular over the last two decades. Despite a large amount of research effort, system-generated MCQs are…
Descriptors: Multiple Choice Tests, High Stakes Tests, Semantics, Evaluation Methods
Smith, Mark; Breakstone, Joel; Wineburg, Sam – Cognition and Instruction, 2019
This article reports a validity study of History Assessments of Thinking (HATs), which are short, constructed-response assessments of historical thinking. In particular, this study focuses on aspects of cognitive validity, which is an examination of whether assessments tap the intended constructs. Think-aloud interviews with 26 high school…
Descriptors: History, History Instruction, Thinking Skills, Multiple Choice Tests
Malec, Wojciech; Krzeminska-Adamek, Malgorzata – Practical Assessment, Research & Evaluation, 2020
The main objective of the article is to compare several methods of evaluating multiple-choice options through classical item analysis. The methods subjected to examination include the tabulation of choice distribution, the interpretation of trace lines, the point-biserial correlation, the categorical analysis of trace lines, and the investigation…
Descriptors: Comparative Analysis, Evaluation Methods, Multiple Choice Tests, Item Analysis
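One of the classical methods listed above, the point-biserial correlation between choosing a given option and the total test score, can be sketched in a few lines; the response data below are invented for illustration and do not come from the article.

```python
# Sketch of point-biserial option analysis for one multiple-choice item.
# Each student's chosen option and total test score are invented data.
import numpy as np
from scipy.stats import pointbiserialr

options = ["A", "B", "C", "D"]
chosen = np.array(list("ABDACBBABDCBABAB"))         # option picked by each student
total = np.array([18, 25, 9, 22, 14, 24, 26, 21,
                  19, 11, 8, 23, 20, 27, 17, 24])   # total test scores

for opt in options:
    picked = (chosen == opt).astype(int)            # 1 if the student chose this option
    if picked.sum() in (0, len(picked)):
        print(f"option {opt}: chosen by no one (or everyone), r undefined")
        continue
    r, p = pointbiserialr(picked, total)
    print(f"option {opt}: point-biserial r = {r:+.2f} (n = {picked.sum()})")
```

A markedly positive correlation for the key and negative correlations for distractors are the usual expectation in this kind of option analysis.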
Koskey, Kristin L. K.; Makki, Nidaa; Ahmed, Wondimu; Garafolo, Nicholas G.; Visco, Donald P., Jr. – School Science and Mathematics, 2020
Integrating engineering into the K-12 science curriculum continues to be a focus in national reform efforts in science education. Although there is an increasing interest in research in and practice of integrating engineering in K-12 science education, to date only a few studies have focused on the development of an assessment tool to measure…
Descriptors: Middle School Students, Engineering, Design, Science Education
Burfitt, Joan – Mathematics Education Research Group of Australasia, 2017
Multiple-choice items are used in large-scale assessments of mathematical achievement for secondary students in many countries. Research findings can be implemented to improve the quality of the items and hence increase the amount of information gathered about student learning from each item. One way to achieve this is to create items for which…
Descriptors: Multiple Choice Tests, Mathematics Tests, Credits, Knowledge Level
Gusev, Marjan; Ristov, Sasko; Armenski, Goce – International Journal of Distance Education Technologies, 2016
Recent technology trends have moved student assessment from traditional formats ("pen-and-paper" and "face-to-face") to modern e-Assessment systems. These approaches allow teachers to conduct and evaluate an exam with a huge number of students in a short period of time. Even more important, both the teacher and the…
Descriptors: Educational Technology, Technology Uses in Education, Computer Assisted Testing, Evaluation Methods
Ganzfried, Sam; Yusuf, Farzana – Education Sciences, 2018
A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are used based on rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were…
Descriptors: Weighted Scores, Test Construction, Student Evaluation, Multiple Choice Tests
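The snippet describes assigning question scores from rough estimates of difficulty and length. The sketch below shows one simple, assumed weighting scheme (difficulty times expected time, rescaled to a 100-point exam); it is an illustration of the general idea, not the authors' algorithm, and all numbers are made up.

```python
# Hypothetical weighting scheme: weight each question by rough difficulty and
# length estimates, then rescale so the exam totals 100 points.
# The scheme and the numbers are assumptions, not the paper's method.

questions = {            # question id: (difficulty on a 1-5 scale, expected minutes)
    "Q1": (2, 3),
    "Q2": (4, 8),
    "Q3": (3, 5),
    "Q4": (5, 10),
}

raw = {q: difficulty * minutes for q, (difficulty, minutes) in questions.items()}
total_raw = sum(raw.values())
points = {q: round(100 * w / total_raw, 1) for q, w in raw.items()}

print(points)            # {'Q1': 5.8, 'Q2': 31.1, 'Q3': 14.6, 'Q4': 48.5}
```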