Showing 1 to 15 of 68 results
Peer reviewed
PDF on ERIC (full text available)
Herwin, Herwin; Pristiwaluyo, Triyanto; Ruslan, Ruslan; Dahalan, Shakila Che – Cypriot Journal of Educational Sciences, 2022
The application of multiple-choice tests often does not consider the scoring technique or the number of options. This study aims to describe the effect of scoring technique and number of options on the reliability of multiple-choice objective tests in elementary school social studies. The study is quantitative research with…
Descriptors: Scoring, Multiple Choice Tests, Test Reliability, Elementary School Students
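The study above examines how scoring technique and option count affect reliability. As background, a standard reliability estimate for dichotomously scored (number-right) multiple-choice data is KR-20; a minimal sketch, with an illustrative `kr20` helper that is not from the article:

```python
def kr20(score_matrix):
    """Kuder-Richardson 20 reliability for 0/1 item scores.

    score_matrix: one row per examinee, each row a list of 0/1 item scores.
    Uses the population variance of total scores, a common convention.
    """
    n_examinees = len(score_matrix)
    n_items = len(score_matrix[0])
    # Item difficulty p_j: proportion of examinees answering item j correctly.
    p = [sum(row[j] for row in score_matrix) / n_examinees for j in range(n_items)]
    # Sum of item variances for dichotomous items: p_j * (1 - p_j).
    item_var_sum = sum(pj * (1 - pj) for pj in p)
    totals = [sum(row) for row in score_matrix]
    mean_total = sum(totals) / n_examinees
    total_var = sum((t - mean_total) ** 2 for t in totals) / n_examinees
    return (n_items / (n_items - 1)) * (1 - item_var_sum / total_var)
```

Changing the scoring rule (e.g. applying a penalty for wrong answers before dichotomizing) changes the score matrix and hence the estimate, which is the kind of effect the study investigates.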
Peer reviewed
Direct link
Wang, Yu; Chiu, Chia-Yi; Köhn, Hans Friedrich – Journal of Educational and Behavioral Statistics, 2023
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information. The effectiveness and economy of administering MC items may have further contributed to their popularity not just in educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Nonparametric Statistics, Test Format, Educational Assessment
Peer reviewed
Direct link
Liao, Ray J. T. – Language Testing, 2023
Among the variety of selected response formats used in L2 reading assessment, multiple-choice (MC) is the most commonly adopted, primarily due to its efficiency and objectiveness. Given the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which the MC format reliably measures learners' L2 reading…
Descriptors: Reading Tests, Language Tests, Second Language Learning, Second Language Instruction
Peer reviewed
PDF on ERIC (full text available)
Fitria Lafifa; Dadan Rosana – Turkish Online Journal of Distance Education, 2024
This research aims to develop a closed-ended multiple-choice test for assessing and evaluating students' digital literacy skills. The sample in this study comprised students at MTsN 1 Blitar City, selected using a purposive sampling technique. The test was also validated by experts, namely 2 Doctors of Physics and Science from Yogyakarta State…
Descriptors: Educational Innovation, Student Evaluation, Digital Literacy, Multiple Choice Tests
Peer reviewed
Direct link
Papenberg, Martin; Diedenhofen, Birk; Musch, Jochen – Journal of Experimental Education, 2021
Testwiseness may introduce construct-irrelevant variance to multiple-choice test scores. Presenting response options sequentially has been proposed as a potential solution to this problem. In an experimental validation, we determined the psychometric properties of a test based on the sequential presentation of response options. We created a strong…
Descriptors: Test Wiseness, Test Validity, Test Reliability, Multiple Choice Tests
New York State Education Department, 2024
The New York State Education Department (NYSED) has a partnership with NWEA for the development of the 2024 Grades 3-8 English Language Arts Tests. Teachers from across the State work with NYSED in a variety of activities to ensure the validity and reliability of the New York State Testing Program (NYSTP). The 2024 Grades 6 and 7 English Language…
Descriptors: Language Tests, Test Format, Language Arts, English Instruction
Peer reviewed
PDF on ERIC (full text available)
Otoyo, Lucia; Bush, Martin – Practical Assessment, Research & Evaluation, 2018
This article presents the results of an empirical study of "subset selection" tests, which are a generalisation of traditional multiple-choice tests in which test takers are able to express partial knowledge. Similar previous studies have mostly been supportive of subset selection, but the deduction of marks for incorrect responses has…
Descriptors: Multiple Choice Tests, Grading, Test Reliability, Test Format
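The article above studies subset-selection tests, in which test takers express partial knowledge, with marks deducted for incorrect responses. One well-known member of this family is elimination (Coombs) scoring; a minimal sketch for illustration only, since the exact marking scheme used in the study may differ:

```python
def elimination_score(eliminated, key, n_options):
    """Elimination (Coombs) scoring, one subset-selection variant.

    The test taker marks the options they believe are wrong:
    +1 for each distractor correctly eliminated;
    -(n_options - 1) if the correct answer (key) is eliminated.
    """
    if key in eliminated:
        return -(n_options - 1)
    return len(eliminated)
```

Under this rule, full knowledge on a four-option item (eliminating all three distractors) earns 3 points, partial knowledge earns 1 or 2, and wrongly eliminating the key costs 3, so blind elimination has zero expected value.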
Peer reviewed
PDF on ERIC (full text available)
Atalmis, Erkan Hasan – International Journal of Assessment Tools in Education, 2018
Although multiple-choice items (MCIs) are widely used for classroom assessment, designing MCIs with a sufficient number of plausible distracters is very challenging for teachers. In this regard, previous empirical studies reveal that using three-option MCIs provides various advantages when compared to four-option MCIs due to less preparation and…
Descriptors: Multiple Choice Tests, Test Items, Difficulty Level, Test Reliability
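The three-option versus four-option comparison above interacts with blind guessing: fewer options raise the expected chance score, which the classical correction for guessing is designed to offset. A minimal sketch with illustrative helper names (not from the article):

```python
def chance_score(n_items, n_options):
    """Expected number-right score from blind guessing on n_options-choice items."""
    return n_items / n_options

def corrected_score(n_right, n_wrong, n_options):
    """Classical correction for guessing: R - W / (k - 1)."""
    return n_right - n_wrong / (n_options - 1)
```

On a 60-item test, a pure guesser expects 20 correct with three options but only 15 with four; after the correction, the guesser's expected score is zero in both cases, which is one reason the option-count trade-off is argued mainly on preparation cost and distractor quality rather than guessing alone.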
Peer reviewed
PDF on ERIC (full text available)
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment), identifies students who struggle with comprehension, and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Peer reviewed
Direct link
Lahner, Felicitas-Maria; Lörwald, Andrea Carolin; Bauer, Daniel; Nouns, Zineb Miriam; Krebs, René; Guttormsen, Sissel; Fischer, Martin R.; Huwendiek, Sören – Advances in Health Sciences Education, 2018
Multiple true-false (MTF) items are a widely used supplement to the commonly used single-best answer (Type A) multiple choice format. However, an optimal scoring algorithm for MTF items has not yet been established, as existing studies yielded conflicting results. Therefore, this study analyzes two questions: What is the optimal scoring algorithm…
Descriptors: Scoring Formulas, Scoring Rubrics, Objective Tests, Multiple Choice Tests
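The study above compares scoring algorithms for multiple true-false (MTF) items. Two algorithms commonly discussed in this literature are all-or-nothing (dichotomous) scoring and per-statement partial credit; a minimal sketch for illustration, noting that the study's exact algorithms may differ:

```python
def mtf_dichotomous(responses, key):
    """Award 1 point only if every true/false judgment matches the key."""
    return 1.0 if list(responses) == list(key) else 0.0

def mtf_partial(responses, key):
    """Partial credit: the fraction of statements judged correctly."""
    hits = sum(r == k for r, k in zip(responses, key))
    return hits / len(key)
```

The two rules diverge exactly when an examinee gets some but not all statements right, which is why they can yield different reliability estimates for the same response data.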
Peer reviewed
PDF on ERIC (full text available)
Polat, Murat – Novitas-ROYAL (Research on Youth and Language), 2020
Classroom practices, materials, and teaching methods in language classes have changed considerably in recent decades and continue to evolve; however, the techniques commonly used to test students' foreign language skills have changed little, despite renewed awareness of Bloom's taxonomy. Testing units at schools rely mostly on multiple choice…
Descriptors: Multiple Choice Tests, Test Format, Test Items, Difficulty Level
Peer reviewed
Direct link
Bush, Martin – Assessment & Evaluation in Higher Education, 2015
The humble multiple-choice test is very widely used within education at all levels, but its susceptibility to guesswork makes it a suboptimal assessment tool. The reliability of a multiple-choice test is partly governed by the number of items it contains; however, longer tests are more time consuming to take, and for some subject areas, it can be…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Format, Test Reliability
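The entry above notes that reliability is partly governed by the number of items, while longer tests cost more testing time. The standard way to project reliability under a change of test length is the Spearman-Brown prophecy formula; a minimal sketch (illustrative helper, assuming the added items are parallel to the existing ones):

```python
def spearman_brown(reliability, length_factor):
    """Projected reliability when test length is multiplied by length_factor.

    Spearman-Brown prophecy formula, assuming parallel items:
    rho' = f * rho / (1 + (f - 1) * rho)
    """
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)
```

For example, doubling a test with reliability 0.5 projects to 0.67, while halving a 0.8-reliability test also projects to 0.67, illustrating the diminishing returns of added length that make guessing-resistant formats attractive.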
Peer reviewed
PDF on ERIC (full text available)
Ali, Syed Haris; Carr, Patrick A.; Ruit, Kenneth G. – Journal of the Scholarship of Teaching and Learning, 2016
Plausible distractors are important for accurate measurement of knowledge via multiple-choice questions (MCQs). This study demonstrates the impact of higher distractor functioning on the validity and reliability of scores obtained on MCQs. Free-response (FR) and MCQ versions of a neurohistology practice exam were given to four cohorts of Year 1 medical…
Descriptors: Scores, Multiple Choice Tests, Test Reliability, Test Validity
Peer reviewed
PDF on ERIC (full text available)
Tarun, Prashant; Krueger, Dale – Journal of Learning in Higher Education, 2016
In the United States education system, the use of student evaluations grew from 29% in 1973 to 86% in 1993, which in turn has increased the importance of student evaluations in faculty retention, tenure, and promotion decisions. However, the impact student evaluations have had on student academic development generates complex educational…
Descriptors: Critical Thinking, Teaching Methods, Multiple Choice Tests, Essay Tests
Peer reviewed
Direct link
Schwichow, Martin; Christoph, Simon; Boone, William J.; Härtig, Hendrik – International Journal of Science Education, 2016
The so-called control-of-variables strategy (CVS) incorporates the important scientific reasoning skills of designing controlled experiments and interpreting experimental outcomes. As CVS is a prominent component of science standards, appropriate assessment instruments are required to measure these scientific reasoning skills and to evaluate the…
Descriptors: Thinking Skills, Science Instruction, Science Experiments, Science Tests