Publication Date | Records
In 2025 | 1
Since 2024 | 12
Since 2021 (last 5 years) | 66
Since 2016 (last 10 years) | 163
Since 2006 (last 20 years) | 287
Descriptor | Records
Guessing (Tests) | 690
Multiple Choice Tests | 282
Test Items | 186
Item Response Theory | 118
Test Reliability | 116
Scores | 101
Scoring Formulas | 100
Test Validity | 97
Foreign Countries | 95
Difficulty Level | 93
Response Style (Tests) | 93
Audience | Records
Researchers | 22
Practitioners | 11
Teachers | 6
Students | 2
Administrators | 1
Parents | 1
Location | Records
Australia | 8
United Kingdom | 6
Canada | 5
China | 5
Germany | 5
Florida | 4
Malaysia | 4
Nigeria | 4
Turkey | 4
California | 3
Cyprus | 3
Laws, Policies, & Programs | Records
Elementary and Secondary… | 2
van den Broek, Gesa S. E.; Gerritsen, Suzanne L.; Oomen, Iris T. J.; Velthoven, Eva; van Boxtel, Femke H. J.; Kester, Liesbeth; van Gog, Tamara – Journal of Educational Psychology, 2023
Multiple-choice questions (MCQs) are popular in vocabulary software because they can be scored automatically and are compatible with many input devices (e.g., touchscreens). Answering MCQs is beneficial for learning, especially when learners retrieve knowledge from memory to evaluate plausible answer alternatives. However, such retrieval may not…
Descriptors: Multiple Choice Tests, Vocabulary Development, Test Format, Cues
Rios, Joseph A.; Deng, Jiayi – Large-scale Assessments in Education, 2021
Background: In testing contexts that are predominately concerned with power, rapid guessing (RG) has the potential to undermine the validity of inferences made from educational assessments, as such responses are unreflective of the knowledge, skills, and abilities assessed. Given this concern, practitioners/researchers have utilized a multitude of…
Descriptors: Test Wiseness, Guessing (Tests), Reaction Time, Computer Assisted Testing
Bell, Raoul; Mieth, Laura; Buchner, Axel – Cognitive Research: Principles and Implications, 2022
Consumers are exposed to large amounts of advertising every day. One way to avoid being manipulated is to monitor the sources of persuasive messages. In the present study it was tested whether high exposure to advertising affects the memory and guessing processes underlying source attributions. Participants were exposed to high or low proportions…
Descriptors: Coping, Advertising, Information Sources, Memory
Nagy, Gabriel; Ulitzsch, Esther; Lindner, Marlit Annalena – Journal of Computer Assisted Learning, 2023
Background: Item response times in computerized assessments are frequently used to identify rapid guessing behaviour as a manifestation of response disengagement. However, non-rapid responses (i.e., with longer response times) are not necessarily engaged, which means that response-time-based procedures could overlook disengaged responses.…
Descriptors: Guessing (Tests), Academic Persistence, Learner Engagement, Computer Assisted Testing
Jana Welling; Timo Gnambs; Claus H. Carstensen – Educational and Psychological Measurement, 2024
Disengaged responding poses a severe threat to the validity of educational large-scale assessments, because item responses from unmotivated test-takers do not reflect their actual ability. Existing identification approaches rely primarily on item response times, which bears the risk of misclassifying fast engaged or slow disengaged responses.…
Descriptors: Foreign Countries, College Students, Guessing (Tests), Multiple Choice Tests
Wise, Steven L. – Applied Measurement in Education, 2019
The identification of rapid guessing is important to promote the validity of achievement test scores, particularly with low-stakes tests. Effective methods for identifying rapid guesses require reliable threshold methods that are also aligned with test taker behavior. Although several common threshold methods are based on rapid guessing response…
Descriptors: Guessing (Tests), Identification, Reaction Time, Reliability
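For readers unfamiliar with response-time thresholds, the following is a minimal Python sketch of a normative-threshold rule (flag a response as a rapid guess when its time falls below 10% of the item's median time); the floor, cap, function names, and example data are illustrative assumptions, not the thresholds evaluated in Wise (2019).

from statistics import median

def rapid_guess_flags(response_times_by_item, floor=1.0, cap=10.0):
    # Flag a response as a rapid guess when its time (in seconds) falls below a
    # normative threshold set at 10% of the item's median time, bounded by a
    # floor and cap (illustrative values, not taken from the article).
    flags = {}
    for item, times in response_times_by_item.items():
        threshold = min(max(0.10 * median(times), floor), cap)
        flags[item] = [t < threshold for t in times]
    return flags

# Example: seconds spent by five examinees on two items.
rts = {"item1": [18.2, 2.1, 25.0, 1.4, 30.5], "item2": [12.0, 3.3, 0.9, 22.8, 15.1]}
print(rapid_guess_flags(rts))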
Vuorre, Matti; Metcalfe, Janet – Metacognition and Learning, 2022
This article investigates the concern that assessment of metacognitive resolution (or relative accuracy--often evaluated by gamma correlations or signal detection theoretic measures such as d_a) is vulnerable to an artifact due to guessing that differentially impacts low as compared to high performers on tasks that involve…
Descriptors: Metacognition, Accuracy, Memory, Multiple Choice Tests
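As a reference for the resolution measure named in the abstract above, here is a minimal Python sketch of the Goodman-Kruskal gamma correlation between trial-level confidence and accuracy; it illustrates the standard statistic only, not the authors' analysis, and the example data are invented.

def goodman_kruskal_gamma(confidence, accuracy):
    # Gamma = (concordant - discordant) / (concordant + discordant),
    # counted over all trial pairs, with tied pairs ignored.
    concordant = discordant = 0
    for i in range(len(confidence)):
        for j in range(i + 1, len(confidence)):
            product = (confidence[i] - confidence[j]) * (accuracy[i] - accuracy[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    if concordant + discordant == 0:
        return float("nan")  # undefined when every pair is tied
    return (concordant - discordant) / (concordant + discordant)

# Example: confidence ratings (1-4) and correctness (0/1) across six trials.
print(goodman_kruskal_gamma([4, 3, 1, 2, 4, 1], [1, 1, 0, 0, 1, 1]))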
Hayat, Bahrul – Cogent Education, 2022
The purpose of this study comprises (1) calibrating the Basic Statistics Test for Indonesian undergraduate psychology students using the Rasch model, (2) testing the impact of adjustment for guessing on item parameters, person parameters, test reliability, and distribution of item difficulty and person ability, and (3) comparing person scores…
Descriptors: Guessing (Tests), Statistics Education, Undergraduate Students, Psychology
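To make the "adjustment for guessing" contrast concrete, here is a minimal Python sketch of the Rasch item response function alongside a three-parameter logistic (3PL) function that adds a lower asymptote for guessing; the parameter values are generic assumptions, not estimates from the article.

import math

def rasch_prob(theta, b):
    # Rasch model: probability of a correct response depends only on
    # ability (theta) minus item difficulty (b).
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def three_pl_prob(theta, a, b, c):
    # 3PL: adds a discrimination parameter (a) and a pseudo-guessing
    # lower asymptote (c), so the curve never drops below c.
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# With a guessing floor of 0.25 (four-option multiple choice), a low-ability
# examinee still has a nontrivial chance of answering correctly.
print(rasch_prob(-2.0, 0.0))                 # about 0.12
print(three_pl_prob(-2.0, 1.0, 0.0, 0.25))   # about 0.34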
Stoeckel, Tim; McLean, Stuart; Nation, Paul – Studies in Second Language Acquisition, 2021
Two commonly used test types to assess vocabulary knowledge for the purpose of reading are size and levels tests. This article first reviews several frequently stated purposes of such tests (e.g., materials selection, tracking vocabulary growth) and provides a reasoned argument for the precision needed to serve such purposes. Then three sources of…
Descriptors: Vocabulary Development, Receptive Language, Written Language, Knowledge Level
Wise, Steven L.; Im, Sukkeun; Lee, Jay – Educational Assessment, 2021
This study investigated test-taking engagement on the Spring 2019 administration of a large-scale state summative assessment. Through the identification of rapid-guessing behavior -- which is a validated indicator of disengagement -- the percentage of Grade 8 test events with meaningful amounts of rapid guessing was 5.5% in mathematics, 6.7% in…
Descriptors: Accountability, Test Results, Guessing (Tests), Summative Evaluation
Waterbury, Glenn Thomas; DeMars, Christine E. – Educational Assessment, 2021
Vertical scaling is used to put tests of different difficulty onto a common metric. The Rasch model is often used to perform vertical scaling, despite its strict functional form. Few, if any, studies have examined anchor item choice when using the Rasch model to vertically scale data that do not fit the model. The purpose of this study was to…
Descriptors: Test Items, Equated Scores, Item Response Theory, Scaling
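As background on common-item linking, here is a minimal Python sketch of a mean-shift constant that places new-form Rasch difficulties on the base-form metric; this is the generic mean/mean approach under invented anchor values, not necessarily the anchoring procedure examined in the study.

def rasch_mean_shift(anchor_b_base, anchor_b_new):
    # Constant added to every new-form difficulty so that the anchor items
    # have the same average difficulty on both forms.
    return sum(anchor_b_base) / len(anchor_b_base) - sum(anchor_b_new) / len(anchor_b_new)

# Example: the same five anchor items calibrated on the base and new forms.
base = [-0.4, 0.1, 0.6, 1.0, 1.3]
new = [-0.9, -0.3, 0.2, 0.5, 0.8]
print(rasch_mean_shift(base, new))  # add this shift to new-form difficulties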
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
Brian Strong; Paul Leeming – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2024
In recent years, there has been considerable interest in how to maximize learners' retention of multiword expressions. One technique that has been shown to be highly effective is the use of exercises such as those found in mainstream English as a second language textbooks. In the present study, we investigated how the execution of a gap-fill…
Descriptors: Language Acquisition, Vocabulary Development, Phrase Structure, Verbs
Read, John – Language Testing, 2023
Published work on vocabulary assessment has grown substantially in the last 10 years, but it is still somewhat outside the mainstream of the field. There has been a recent call for those developing vocabulary tests to apply professional standards to their work, especially in validating their instruments for specified purposes before releasing them…
Descriptors: Language Tests, Vocabulary Development, Second Language Learning, Test Format
Rios, Joseph A.; Soland, James – International Journal of Testing, 2022
The objective of the present study was to investigate item-, examinee-, and country-level correlates of rapid guessing (RG) in the context of the 2018 PISA science assessment. Analyzing data from 267,148 examinees across 71 countries showed that over 50% of examinees engaged in RG on an average proportion of one in 10 items. Descriptive…
Descriptors: Foreign Countries, International Assessment, Achievement Tests, Secondary School Students