Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 4
  Since 2006 (last 20 years): 10
Descriptor
  Multiple Choice Tests: 10
  Scoring Formulas: 10
  Test Reliability: 4
  Accuracy: 3
  Comparative Analysis: 3
  Psychometrics: 3
  Scores: 3
  Test Construction: 3
  Anatomy: 2
  Correlation: 2
  Difficulty Level: 2
Author
  Bauer, Daniel: 1
  Bulut, Okan: 1
  Cirillo, Pier F.: 1
  Ferreira, Maria Amélia: 1
  Fischer, Martin R.: 1
  Gaio, A. Rita: 1
  Gierl, Mark J.: 1
  Guo, Qi: 1
  Guttormsen, Sissel: 1
  Hortsch, Michael: 1
  Huwendiek, Sören: 1
Publication Type
  Journal Articles: 9
  Reports - Research: 9
  Information Analyses: 1
  Reports - Evaluative: 1
Education Level
  Higher Education: 4
  Postsecondary Education: 2
  High Schools: 1
Location
  Czech Republic: 1
  Ireland (Dublin): 1
Assessments and Surveys
  SAT (College Admission Test): 1
Kacprzyk, Joanna; Parsons, Martin; Maguire, Patricia B.; Stewart, Gavin S. – Irish Educational Studies, 2019
The optimum assessment structure measures student knowledge accurately and without bias. In this study, the performance of first-year undergraduate science students at University College Dublin was evaluated to test the gender equality of the assessment structure in place. Results of male and female students taking three life science…
Descriptors: Science Tests, Gender Bias, College Freshmen, Foreign Countries
Lahner, Felicitas-Maria; Lörwald, Andrea Carolin; Bauer, Daniel; Nouns, Zineb Miriam; Krebs, René; Guttormsen, Sissel; Fischer, Martin R.; Huwendiek, Sören – Advances in Health Sciences Education, 2018
Multiple true-false (MTF) items are a widely used supplement to the commonly used single-best answer (Type A) multiple choice format. However, an optimal scoring algorithm for MTF items has not yet been established, as existing studies yielded conflicting results. Therefore, this study analyzes two questions: What is the optimal scoring algorithm…
Descriptors: Scoring Formulas, Scoring Rubrics, Objective Tests, Multiple Choice Tests
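The abstract above does not name the scoring algorithms that were compared, so the sketch below is only a generic illustration, not material from the study: two rules commonly discussed for multiple true-false (MTF) items, all-or-nothing (dichotomous) scoring and per-statement partial credit.

    from typing import List

    def dichotomous_score(responses: List[bool], key: List[bool]) -> float:
        # All-or-nothing: one point only if every true/false statement is marked correctly.
        return 1.0 if responses == key else 0.0

    def partial_credit_score(responses: List[bool], key: List[bool]) -> float:
        # Partial credit: fraction of statements marked correctly.
        correct = sum(r == k for r, k in zip(responses, key))
        return correct / len(key)

    # Hypothetical item with four statements; three are marked correctly.
    key = [True, False, True, True]
    responses = [True, False, False, True]
    print(dichotomous_score(responses, key))    # 0.0
    print(partial_credit_score(responses, key)) # 0.75

Other variants penalize wrong marks or require a minimum number of correct statements per item; which of these the study examined cannot be inferred from the abstract.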
Gierl, Mark J.; Bulut, Okan; Guo, Qi; Zhang, Xinxin – Review of Educational Research, 2017
Multiple-choice testing is considered one of the most effective and enduring forms of educational assessment that remain in practice today. This study presents a comprehensive review of the literature on multiple-choice testing in education focused specifically on the development, analysis, and use of the incorrect options, which are also…
Descriptors: Multiple Choice Tests, Difficulty Level, Accuracy, Error Patterns
Severo, Milton; Gaio, A. Rita; Povo, Ana; Silva-Pereira, Fernanda; Ferreira, Maria Amélia – Anatomical Sciences Education, 2015
In theory, formula scoring methods increase the reliability of multiple-choice tests compared with number-right scoring. This study aimed to evaluate the impact of the formula scoring method in clinical anatomy multiple-choice examinations and to compare it with the number-right scoring method, hoping to achieve an…
Descriptors: Anatomy, Multiple Choice Tests, Scoring, Decision Making
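For context, "formula scoring" in this literature usually refers to the classical correction for guessing, contrasted with number-right scoring; whether the study used exactly this variant is not stated in the abstract. A minimal sketch of both rules with illustrative numbers:

    def number_right_score(right: int) -> float:
        # Number-right scoring: one point per correct answer, no penalty for wrong or omitted answers.
        return float(right)

    def formula_score(right: int, wrong: int, options_per_item: int) -> float:
        # Classical correction for guessing: each wrong answer costs 1/(k - 1) points,
        # omitted items are not penalized, so blind guessing has an expected gain of zero.
        return right - wrong / (options_per_item - 1)

    # Hypothetical example: 40 right, 12 wrong, 8 omitted on a test with 5 options per item.
    print(number_right_score(40))    # 40.0
    print(formula_score(40, 12, 5))  # 37.0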
Zaidi, Nikki B.; Hwang, Charles; Scott, Sara; Stallard, Stefanie; Purkiss, Joel; Hortsch, Michael – Anatomical Sciences Education, 2017
Bloom's taxonomy was adopted to create a subject-specific scoring tool for histology multiple-choice questions (MCQs). This Bloom's Taxonomy Histology Tool (BTHT) was used to analyze teacher- and student-generated quiz and examination questions from a graduate level histology course. Multiple-choice questions using histological images were…
Descriptors: Taxonomy, Anatomy, Graduate Students, Scoring Formulas
Jancarík, Antonín; Kostelecká, Yvona – Electronic Journal of e-Learning, 2015
Electronic testing has become a regular part of online courses. Most learning management systems offer a wide range of tools that can be used in electronic tests. With respect to time demands, the most efficient tools are those that allow automatic assessment. This paper focuses on one of these tools: matching questions in which one…
Descriptors: Online Courses, Computer Assisted Testing, Test Items, Scoring Formulas
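The abstract is cut off before it describes the scoring analysis, so the following is only a generic illustration of why guessing matters for matching questions, not the article's model: under a uniformly random one-to-one matching of n stems to n options, the expected number of correct matches is 1 for any n, while the chance of a fully correct matching is 1/n!.

    from math import factorial
    import random

    def prob_all_correct_by_guessing(n: int) -> float:
        # Probability that a random one-to-one matching of n stems to n options is fully correct.
        return 1.0 / factorial(n)

    def expected_correct_by_guessing(n: int, trials: int = 100_000) -> float:
        # Monte Carlo estimate of the expected number of correct matches under random guessing
        # (analytically this equals 1 for any n).
        total = 0
        for _ in range(trials):
            perm = random.sample(range(n), n)
            total += sum(1 for i, p in enumerate(perm) if i == p)
        return total / trials

    print(prob_all_correct_by_guessing(5))   # 1/120 ≈ 0.0083
    print(expected_correct_by_guessing(5))   # ≈ 1.0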
Van Hecke, Tanja – Teaching Mathematics and Its Applications, 2015
Optimal assessment tools should measure students' knowledge correctly and without bias within a limited time. Multiple-choice scoring is one method of automating the scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability to pass: number-right scoring, the initial correction (IC) and…
Descriptors: Multiple Choice Tests, Error Correction, Grading, Evaluation Methods
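The abstract mentions modelling "the probability to pass" but does not show the model, so the sketch below is only a plausible illustration under simplifying assumptions of my own: a student who guesses every item uniformly at random, number-right scoring, and a fixed pass mark. It uses the binomial distribution to compute the chance that such a student passes.

    from math import comb

    def pass_probability_by_guessing(n_items: int, n_options: int, pass_mark: int) -> float:
        # P(score >= pass_mark) for a student guessing uniformly at random,
        # under number-right scoring (one point per correct answer, no penalty).
        p = 1.0 / n_options
        return sum(comb(n_items, r) * p**r * (1 - p)**(n_items - r)
                   for r in range(pass_mark, n_items + 1))

    # Hypothetical example: 20 four-option items, pass mark of 10 correct answers.
    print(round(pass_probability_by_guessing(20, 4, 10), 4))  # ≈ 0.0139

Penalty-based rules such as the correction for guessing shown earlier lower this pass probability further, which is the kind of trade-off a probabilistic comparison of scoring methods examines.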
Merrel, Jeremy D.; Cirillo, Pier F.; Schwartz, Pauline M.; Webb, Jeffrey A. – Higher Education Studies, 2015
Multiple choice testing is a common but often ineffective method for evaluating learning. A newer approach, however, using Immediate Feedback Assessment Technique (IF-AT®, Epstein Educational Enterprise, Inc.) forms, offers several advantages. In particular, a student learns immediately if his or her answer is correct and, in the case of an…
Descriptors: Multiple Choice Tests, Feedback (Response), Evaluation Methods, Guessing (Tests)
Stewart, Jeffrey; White, David A. – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2011
Multiple-choice tests such as the Vocabulary Levels Test (VLT) are often viewed as preferable estimators of vocabulary knowledge compared with yes/no checklists, because self-reporting tests introduce the possibility of students overreporting or underreporting scores. However, multiple-choice tests have their own unique disadvantages. It has…
Descriptors: Guessing (Tests), Scoring Formulas, Multiple Choice Tests, Test Reliability
Kobrin, Jennifer L.; Kimmel, Ernest W. – College Board, 2006
Based on statistics from the first few administrations of the SAT writing section, the test is performing as expected. The reliability of the writing section is very similar to that of other writing assessments. Based on preliminary validity research, the writing section is expected to add modestly to the prediction of college performance when…
Descriptors: Test Construction, Writing Tests, Cognitive Tests, College Entrance Examinations