Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 16
Since 2016 (last 10 years): 54
Since 2006 (last 20 years): 83
Descriptor
Guessing (Tests): 282
Multiple Choice Tests: 282
Test Items: 86
Test Reliability: 74
Scoring Formulas: 69
Test Validity: 56
Response Style (Tests): 52
Scores: 47
Scoring: 46
Testing Problems: 46
Higher Education: 45
Author
Frary, Robert B.: 16
Wilcox, Rand R.: 6
Burton, Richard F.: 4
Cross, Lawrence H.: 4
Andrich, David: 3
Boldt, Robert F.: 3
Choppin, Bruce: 3
Donlon, Thomas F.: 3
Ebel, Robert L.: 3
Hutchinson, T. P.: 3
Jacobs, Stanley S.: 3
Audience
Researchers: 8
Practitioners: 4
Teachers: 1
Location
Australia: 4
Canada: 3
China: 3
Germany: 3
Nigeria: 3
Turkey: 3
United Kingdom: 3
Israel: 2
Jordan: 2
Malaysia: 2
Netherlands: 2
Laws, Policies, & Programs
Elementary and Secondary…: 2
Brian C. Leventhal; Dena Pastor – Educational and Psychological Measurement, 2024
Low-stakes test performance commonly reflects both examinee ability and effort. Examinees exhibiting low effort may be identified through rapid guessing behavior throughout an assessment. A plethora of methods has been proposed to adjust scores once rapid guesses have been identified, but these have been plagued by strong assumptions or the…
Descriptors: College Students, Guessing (Tests), Multiple Choice Tests, Item Response Theory
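As a rough illustration of how rapid guesses are commonly flagged before any score adjustment (a minimal sketch using a single fixed response-time threshold; the cutoff value, data layout, and function names are assumptions for illustration, not the authors' method, and real applications typically derive item-specific thresholds):

```python
# Minimal sketch: flag item responses as rapid guesses when the response
# time falls below a fixed cutoff, then summarize the rate per examinee.
from collections import defaultdict
from typing import Dict, List, Tuple

THRESHOLD_SECONDS = 5.0  # assumed global cutoff, for illustration only

def rapid_guess_rates(
    responses: List[Tuple[str, str, float]],  # (examinee, item, seconds)
) -> Dict[str, float]:
    """Return each examinee's share of responses faster than the cutoff."""
    times_by_examinee: Dict[str, List[float]] = defaultdict(list)
    for examinee, _item, seconds in responses:
        times_by_examinee[examinee].append(seconds)
    return {
        examinee: sum(t < THRESHOLD_SECONDS for t in times) / len(times)
        for examinee, times in times_by_examinee.items()
    }

# Illustrative data: e2 answers item i1 implausibly fast.
data = [("e1", "i1", 12.4), ("e1", "i2", 9.8),
        ("e2", "i1", 1.3), ("e2", "i2", 10.1)]
print(rapid_guess_rates(data))  # {'e1': 0.0, 'e2': 0.5}
```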
Jin, Kuan-Yu; Siu, Wai-Lok; Huang, Xiaoting – Journal of Educational Measurement, 2022
Multiple-choice (MC) items are widely used in educational tests. Distractor analysis, an important procedure for checking the utility of response options within an MC item, can be readily implemented in the framework of item response theory (IRT). Although random guessing is a common test-taker behavior when answering MC items, none of the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Item Response Theory, Attention
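For context, distractor analysis in IRT is often carried out with a model that gives each response option its own category curve, such as Bock's nominal response model (shown here as one standard formulation, not necessarily the model this paper proposes):

```latex
% Nominal response model: probability of choosing option k among K options,
% with slope a_k and intercept c_k per option
P(X = k \mid \theta) = \frac{\exp(a_k \theta + c_k)}{\sum_{j=1}^{K} \exp(a_j \theta + c_j)}
```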
McGuire, Michael J. – International Journal for the Scholarship of Teaching and Learning, 2023
College students in a lower-division psychology course made metacognitive judgments by predicting and postdicting performance for true-false, multiple-choice, and fill-in-the-blank question sets on each of three exams. This study investigated which question format would result in the most accurate metacognitive judgments. Extending Koriat's (1997)…
Descriptors: Metacognition, Multiple Choice Tests, Accuracy, Test Format
van den Broek, Gesa S. E.; Gerritsen, Suzanne L.; Oomen, Iris T. J.; Velthoven, Eva; van Boxtel, Femke H. J.; Kester, Liesbeth; van Gog, Tamara – Journal of Educational Psychology, 2023
Multiple-choice questions (MCQs) are popular in vocabulary software because they can be scored automatically and are compatible with many input devices (e.g., touchscreens). Answering MCQs is beneficial for learning, especially when learners retrieve knowledge from memory to evaluate plausible answer alternatives. However, such retrieval may not…
Descriptors: Multiple Choice Tests, Vocabulary Development, Test Format, Cues
Jana Welling; Timo Gnambs; Claus H. Carstensen – Educational and Psychological Measurement, 2024
Disengaged responding poses a severe threat to the validity of educational large-scale assessments, because item responses from unmotivated test-takers do not reflect their actual ability. Existing identification approaches rely primarily on item response times, which bears the risk of misclassifying fast engaged or slow disengaged responses…
Descriptors: Foreign Countries, College Students, Guessing (Tests), Multiple Choice Tests
Vuorre, Matti; Metcalfe, Janet – Metacognition and Learning, 2022
This article investigates the concern that assessment of metacognitive resolution (or relative accuracy, often evaluated by gamma correlations or signal-detection-theoretic measures such as d_a) is vulnerable to an artifact of guessing that differentially impacts low performers as compared to high performers on tasks that involve…
Descriptors: Metacognition, Accuracy, Memory, Multiple Choice Tests
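The gamma correlation referenced here is the Goodman-Kruskal statistic, computed over pairs of trials from the counts of concordant and discordant (judgment, performance) orderings:

```latex
% Goodman-Kruskal gamma: N_c = concordant pairs, N_d = discordant pairs
\gamma = \frac{N_c - N_d}{N_c + N_d}
```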
Waterbury, Glenn Thomas; DeMars, Christine E. – Educational Assessment, 2021
Vertical scaling is used to put tests of different difficulty onto a common metric. The Rasch model is often used to perform vertical scaling, despite its strict functional form. Few, if any, studies have examined anchor item choice when using the Rasch model to vertically scale data that do not fit the model. The purpose of this study was to…
Descriptors: Test Items, Equated Scores, Item Response Theory, Scaling
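For reference, the Rasch model's "strict functional form" comes from constraining each item to a single difficulty parameter, with no discrimination or guessing parameters:

```latex
% Rasch model: theta_i = ability of person i, b_j = difficulty of item j
P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}
```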
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
Abu-Ghazalah, Rashid M.; Dubins, David N.; Poon, Gregory M. K. – Applied Measurement in Education, 2023
Multiple choice results are inherently probabilistic outcomes, as correct responses reflect a combination of knowledge and guessing, while incorrect responses additionally reflect blunder, a confidently committed mistake. To objectively resolve knowledge from responses in an MC test structure, we evaluated probabilistic models that explicitly…
Descriptors: Guessing (Tests), Multiple Choice Tests, Probability, Models
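A common starting point for such probabilistic models (the textbook knowledge-or-random-guessing decomposition, shown for orientation and not necessarily one of the specific models the authors evaluate) expresses the probability of a correct response on an m-option item as:

```latex
% p = probability the examinee knows the answer; m = number of options
P(\text{correct}) = p + \frac{1 - p}{m}
```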
Coniam, David; Lee, Tony; Lampropoulou, Leda – English Language Teaching, 2021
This article explores the issue of identifying guessers, with a specific focus on multiple-choice tests. Guessing has long been considered a problem because it compromises validity: a test taker scoring higher than they should through guessing does not provide an accurate picture of their ability. After an initial description of issues…
Descriptors: Language Tests, Guessing (Tests), English (Second Language), Second Language Learning
Lee, Sora; Bolt, Daniel M. – Journal of Educational Measurement, 2018
Both the statistical and interpretational shortcomings of the three-parameter logistic (3PL) model in accommodating guessing effects on multiple-choice items are well documented. We consider the use of a residual heteroscedasticity (RH) model as an alternative, and compare its performance to the 3PL with real test data sets and through simulation…
Descriptors: Statistical Analysis, Models, Guessing (Tests), Multiple Choice Tests
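The 3PL model discussed here accommodates guessing through a lower asymptote c (the pseudo-guessing parameter) added to the logistic item response function:

```latex
% 3PL: a = discrimination, b = difficulty, c = pseudo-guessing asymptote
P(X = 1 \mid \theta) = c + (1 - c)\,\frac{1}{1 + \exp\left[-a(\theta - b)\right]}
```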
Jimoh, Mohammed Idris; Daramola, Dorcas Sola; Oladele, Jumoke Iyabode; Sheu, Adaramaja Lukman – Anatolian Journal of Education, 2020
The study investigated items that were prone to guessing in Senior School Certificate Examinations (SSCE) Economics multiple-choice tests among students in Kwara State, Nigeria. The 2016 West African Senior Secondary Certificate Examinations (WASSCE) and National Examinations Council (NECO) Economics multiple-choice test items were subjected to…
Descriptors: Foreign Countries, High School Students, Guessing (Tests), Test Items
Deribo, Tobias; Goldhammer, Frank; Kroehne, Ulf – Educational and Psychological Measurement, 2023
As researchers in the social sciences, we are often interested in studying constructs that are not directly observable through assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is skimmed briefly but not read and engaged with in depth. Hence, a…
Descriptors: Reaction Time, Guessing (Tests), Behavior Patterns, Bias
Papenberg, Martin; Diedenhofen, Birk; Musch, Jochen – Journal of Experimental Education, 2021
Testwiseness may introduce construct-irrelevant variance to multiple-choice test scores. Presenting response options sequentially has been proposed as a potential solution to this problem. In an experimental validation, we determined the psychometric properties of a test based on the sequential presentation of response options. We created a strong…
Descriptors: Test Wiseness, Test Validity, Test Reliability, Multiple Choice Tests
Cesur, Kursat – Educational Policy Analysis and Strategic Research, 2019
Examinees' performances are assessed using a wide variety of techniques. Multiple-choice (MC) tests are among the most frequently used. Nearly all standardized achievement tests make use of MC test items, and there are various ways to score these tests. The study compares number-right and liberal scoring (SAC) methods. Mixed…
Descriptors: Multiple Choice Tests, Scoring, Evaluation Methods, Guessing (Tests)
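Number-right scoring simply counts correct answers, whereas formula-scoring corrections penalize wrong answers to offset expected gains from random guessing. The classic correction is shown below as a general formula; the liberal-scoring variant compared in this paper may differ in its penalty structure:

```latex
% FS = formula score, R = number right, W = number wrong,
% k = number of options per item
FS = R - \frac{W}{k - 1}
```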