Showing 1 to 15 of 48 results
Peer reviewed
Direct link
Little, Jeri L.; Frickey, Elise A.; Fung, Alexandra K. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2019
Taking a test improves memory for that tested information, a finding referred to as the testing effect. Multiple-choice tests tend to produce smaller testing effects than do cued-recall tests, and this result is largely attributed to the different processing that the two formats are assumed to induce. Specifically, it is generally assumed that the…
Descriptors: Multiple Choice Tests, Memory, Cognitive Processes, Recall (Psychology)
Peer reviewed
PDF on ERIC
Vera Frith; Robert N. Prince – Numeracy, 2018
The National Benchmark Test Project (NBTP) was commissioned by Higher Education South Africa in 2005 to assess the academic proficiency of prospective students. The competencies assessed include quantitative literacy using the NBTP QL test. This instrument is a criterion-referenced multiple-choice test developed collaboratively by South African…
Descriptors: National Competency Tests, Numeracy, Mathematics Tests, Foreign Countries
Peer reviewed
PDF on ERIC
Teneqexhi, Romeo; Qirko, Margarita; Sharko, Genci; Vrapi, Fatmir; Kuneshka, Loreta – International Association for Development of the Information Society, 2017
Exam assessment is one of the most tedious tasks for university teachers all over the world. Multiple-choice tests make exam assessment somewhat easier, but the teacher cannot prepare more than 3-4 variants; in this case, the possibility of students cheating from one another becomes a risk for "objective assessment outcome." On…
Descriptors: Testing, Computer Assisted Testing, Test Items, Test Construction
Peer reviewed
PDF on ERIC
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, by changing the amount and order of the content presented, the test taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
Peer reviewed
Direct link
Haro, Elizabeth K.; Haro, Luis S. – Journal of Chemical Education, 2014
The multiple-choice question (MCQ) is the foundation of knowledge assessment in K-12, higher education, and standardized entrance exams (including the GRE, MCAT, and DAT). However, standard MCQ exams are limited with respect to the types of questions that can be asked when there are only five choices. MCQs offering additional choices more…
Descriptors: Multiple Choice Tests, Coding, Scoring Rubrics, Test Scoring Machines
Peer reviewed
PDF on ERIC
Gorbunova, Tatiana N. – European Journal of Contemporary Education, 2017
The subject of the research is building methodologies for evaluating student knowledge through testing. The author points to the importance of feedback on the level of mastery during the learning process. Testing is considered as a tool. The object of the study is creating test-system models for defence practice problems. Special attention is paid…
Descriptors: Testing, Evaluation Methods, Feedback (Response), Simulation
Peer reviewed
Direct link
Bolt, Daniel M.; Wollack, James A.; Suh, Youngsuk – Psychometrika, 2012
Nested logit models have been presented as an alternative to multinomial logistic models for multiple-choice test items (Suh and Bolt in "Psychometrika" 75:454-473, 2010) and possess a mathematical structure that naturally lends itself to evaluating the incremental information provided by attending to distractor selection in scoring. One potential…
Descriptors: Test Items, Multiple Choice Tests, Models, Scoring
Peer reviewed
PDF on ERIC
Merrel, Jeremy D.; Cirillo, Pier F.; Schwartz, Pauline M.; Webb, Jeffrey A. – Higher Education Studies, 2015
Multiple-choice testing is a common but often ineffective method for evaluating learning. A newer approach, however, using Immediate Feedback Assessment Technique (IF-AT®, Epstein Educational Enterprises, Inc.) forms, offers several advantages. In particular, a student learns immediately if his or her answer is correct and, in the case of an…
Descriptors: Multiple Choice Tests, Feedback (Response), Evaluation Methods, Guessing (Tests)
Peer reviewed
PDF on ERIC
Nakayama, Minoru; Yamamoto, Hiroh; Santiago, Rowena – Electronic Journal of e-Learning, 2010
e-Learning has some restrictions on how learning performance is assessed. Online testing is usually in the form of multiple-choice questions, without any essay type of learning assessment. Major reasons for employing multiple-choice tasks in e-learning include ease of implementation and ease of managing learner's responses. To address this…
Descriptors: Electronic Learning, Testing, Essay Tests, Online Courses
Peer reviewed
Direct link
Puhan, Gautam – Applied Measurement in Education, 2009
The purpose of this study is to determine the extent of scale drift on a test that employs cut scores. It was essential to examine scale drift for this testing program because new forms in this testing program are often put on scale through a series of intermediate equatings (known as equating chains). This process may cause equating error to…
Descriptors: Testing Programs, Testing, Measurement Techniques, Item Response Theory
Livingston, Samuel A. – Educational Testing Service, 2009
To many people, standardized testing means multiple-choice testing. However, some tests contain questions that require the test taker to produce the answer rather than simply choose it from a list. The required response can be as simple as the writing of a single word or as complex as the design of a laboratory experiment to test a scientific…
Descriptors: Testing, Standardized Tests, Multiple Choice Tests, Laboratory Experiments
Peer reviewed
Lord, Frederic M. – Journal of Educational Measurement, 1975
The assumption that examinees either know the answer to a test item or else guess at random is usually totally implausible. A different assumption is outlined, under which formula scoring is found to be clearly superior to number right scoring. (Author)
Descriptors: Guessing (Tests), Multiple Choice Tests, Response Style (Tests), Scoring
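The formula-scoring idea summarized in the Lord (1975) abstract can be sketched numerically. The sketch below uses the classic correction-for-guessing formula, R − W/(k−1); the five-option count and the sample response pattern are illustrative assumptions, not taken from the article:

```python
def number_right(responses):
    """Number-right scoring: one point per correct answer;
    wrong answers and omits both score zero."""
    return sum(1 for r in responses if r == "correct")

def formula_score(responses, k=5):
    """Formula scoring: R - W/(k-1), where k is the number of
    options per item. Wrong answers are penalized by the expected
    gain from blind guessing; omitted items are not penalized."""
    right = sum(1 for r in responses if r == "correct")
    wrong = sum(1 for r in responses if r == "wrong")
    return right - wrong / (k - 1)

# 40-item test: 30 right, 8 wrong, 2 omitted
responses = ["correct"] * 30 + ["wrong"] * 8 + ["omit"] * 2
print(number_right(responses))   # 30
print(formula_score(responses))  # 30 - 8/4 = 28.0
```

Under number-right scoring a blind guess can only help, so examinees are encouraged to answer everything; formula scoring makes a random guess have zero expected value, which is why the instructions given to examinees matter to the comparison Lord analyzes.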
Peer reviewed
Frary, Robert B. – Applied Psychological Measurement, 1980
Six scoring methods for assigning weights to right or wrong responses according to various instructions given to test takers are analyzed with respect to expected chance scores and the effect of various levels of information and misinformation. Three of the methods provide feedback to the test taker. (Author/CTM)
Descriptors: Guessing (Tests), Knowledge Level, Multiple Choice Tests, Scores
Peer reviewed
Essex, Diane L. – Journal of Medical Education, 1976
Two multiple-choice scoring schemes--a partial-credit scheme and a dichotomous approach--were compared by analyzing means, variances, and reliabilities on alternate measures, along with student reactions. Students preferred the partial-credit approach, which is recommended if rewarding partial knowledge is an important concern. (Editor/JT)
Descriptors: Higher Education, Medical Students, Multiple Choice Tests, Reliability
Boldt, Robert F. – 1974
One formulation of confidence scoring requires the examinee to indicate as a number his personal probability of the correctness of each alternative in a multiple-choice test. For this formulation, the expected value of a linear transformation of the logarithm of the probability assigned to the correct response is maximized if the examinee accurately reports his personal probability. To equate…
Descriptors: Confidence Testing, Guessing (Tests), Multiple Choice Tests, Probability