Publication Date
  In 2025: 1
  Since 2024: 8
  Since 2021 (last 5 years): 24
  Since 2016 (last 10 years): 51
  Since 2006 (last 20 years): 95
Descriptor
  Guessing (Tests): 118
  Item Response Theory: 118
  Test Items: 57
  Models: 36
  Difficulty Level: 35
  Multiple Choice Tests: 33
  Foreign Countries: 22
  Simulation: 22
  Reaction Time: 21
  Computation: 19
  Comparative Analysis: 18
Author
  DeMars, Christine E.: 7
  Wise, Steven L.: 6
  Cai, Li: 3
  Falk, Carl F.: 3
  Jiayi Deng: 3
  Rios, Joseph A.: 3
  Schnipke, Deborah L.: 3
  Sideridis, Georgios: 3
  Andrich, David: 2
  Chiu, Ting-Wei: 2
  De Boeck, Paul: 2
Audience
  Practitioners: 1
Location
  Germany: 2
  United States: 2
  Africa: 1
  Australia: 1
  Brazil: 1
  California: 1
  Chile: 1
  China: 1
  France: 1
  Greece: 1
  Hong Kong: 1
Joseph A. Rios; Jiayi Deng – Educational and Psychological Measurement, 2024
Rapid guessing (RG) is a form of non-effortful responding that is characterized by short response latencies. This construct-irrelevant behavior has been shown in previous research to bias inferences concerning measurement properties and scores. To mitigate these deleterious effects, a number of response time threshold scoring procedures have been…
Descriptors: Reaction Time, Scores, Item Response Theory, Guessing (Tests)
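The response-time-threshold idea running through this line of work is simple to sketch. Below is a minimal Python illustration, assuming per-item thresholds are already available (how to set them is the contested part in this literature); the array names and the proportion-correct scoring are illustrative, not the authors' procedure:

```python
import numpy as np

def flag_rapid_guesses(resp_times, thresholds):
    """Flag a response as a rapid guess when its latency falls below the
    item's time threshold. resp_times: (n_examinees, n_items) seconds;
    thresholds: (n_items,) per-item cutoffs (assumed given here)."""
    return resp_times < thresholds

def filtered_proportion_correct(responses, rg_flags):
    """Score only effortful responses, treating rapid guesses as
    not administered rather than as wrong (one common filtering choice).
    Assumes every examinee has at least one effortful response."""
    effortful = ~rg_flags
    return (responses * effortful).sum(axis=1) / effortful.sum(axis=1)
```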
Xiangyi Liao; Daniel M. Bolt – Educational Measurement: Issues and Practice, 2024
Traditional approaches to the modeling of multiple-choice item response data (e.g., 3PL, 4PL models) emphasize slips and guesses as random events. In this paper, an item response model is presented that characterizes both disjunctively interacting guessing and conjunctively interacting slipping processes as proficiency-related phenomena. We show…
Descriptors: Item Response Theory, Test Items, Error Correction, Guessing (Tests)
Yue Liu; Zhen Li; Hongyun Liu; Xiaofeng You – Applied Measurement in Education, 2024
Low test-taking effort of examinees has been considered a source of construct-irrelevant variance in item response modeling, leading to serious consequences on parameter estimation. This study aims to investigate how non-effortful response (NER) influences the estimation of item and person parameters in item-pool scale linking (IPSL) and whether…
Descriptors: Item Response Theory, Computation, Simulation, Responses
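For orientation, item-pool scale linking typically rests on a linear transformation between scales. A standard mean-sigma version (one of several linking methods, and not necessarily the one used in this study) maps item difficulties b, discriminations a, and abilities theta from scale X onto scale Y:

```latex
A = \frac{\sigma(b_Y)}{\sigma(b_X)}, \qquad B = \mu(b_Y) - A\,\mu(b_X),
\qquad b^{*} = A\,b_X + B, \qquad a^{*} = \frac{a_X}{A}, \qquad \theta^{*} = A\,\theta_X + B
```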
Rios, Joseph A. – Educational and Psychological Measurement, 2022
The presence of rapid guessing (RG) presents a challenge to practitioners in obtaining accurate estimates of measurement properties and examinee ability. In response to this concern, researchers have utilized response times as a proxy of RG and have attempted to improve parameter estimation accuracy by filtering RG responses using popular scoring…
Descriptors: Guessing (Tests), Classification, Accuracy, Computation
Jiayi Deng – ProQuest LLC, 2024
Test score comparability in international large-scale assessments (LSA) is of utmost importance in measuring the effectiveness of education systems and understanding the impact of education on economic growth. To effectively compare test scores on an international scale, score linking is widely used to convert raw scores from different linguistic…
Descriptors: Item Response Theory, Scoring Rubrics, Scoring, Error of Measurement
Antoniou, Faye; Alkhadim, Ghadah; Mouzaki, Angeliki; Simos, Panagiotis – Journal of Intelligence, 2022
The purpose of the present study was to evaluate the psychometric properties of Raven's Colored Progressive Matrices by estimating the presence of pseudo-guessing and pseudo-carelessness. Participants were 1127 children aged 5 to 11. Guessing and carelessness were assessed using the lower and upper asymptotes of the 3PL and 4PL item response…
Descriptors: Psychometrics, Intelligence Tests, Item Response Theory, Children
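For reference, the lower and upper asymptotes mentioned here correspond to the c and d parameters in the standard four-parameter logistic item response function:

```latex
P_i(\theta) = c_i + \frac{d_i - c_i}{1 + \exp\!\left[-a_i(\theta - b_i)\right]}
```

Here c_i is the lower asymptote (pseudo-guessing) and d_i the upper asymptote (pseudo-carelessness); fixing d_i = 1 recovers the 3PL, and additionally fixing c_i = 0 gives the 2PL.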
Joseph A. Rios; Jiayi Deng – Educational and Psychological Measurement, 2025
To mitigate the potential damaging consequences of rapid guessing (RG), a form of noneffortful responding, researchers have proposed a number of scoring approaches. The present simulation study examines the robustness of the most popular of these approaches, the unidimensional effort-moderated (EM) scoring procedure, to multidimensional RG (i.e.,…
Descriptors: Scoring, Guessing (Tests), Reaction Time, Item Response Theory
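Effort-moderated (EM) scoring is usually written as a response-time-conditioned mixture of an IRT model and a chance process. A common formulation (following Wise and DeMars's effort-moderated model, stated here for context rather than taken from this article) is:

```latex
P(X_{ij} = 1) = e_{ij}\,P_j(\theta_i) + (1 - e_{ij})\,g_j
```

where e_ij = 1 if examinee i's response time on item j exceeds the threshold (an effortful response), P_j is the IRT item response function, and g_j is the chance success rate, e.g., 1/m for an m-option item.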
Rios, Joseph A.; Deng, Jiayi; Ihlenfeldt, Samuel D. – Educational Assessment, 2022
The present meta-analysis sought to quantify the average degree of aggregated test score distortion due to rapid guessing (RG). Included studies group-administered a low-stakes cognitive assessment, identified RG via response times, and reported the rate of examinees engaging in RG, the percentage of RG responses observed, and/or the degree of…
Descriptors: Guessing (Tests), Testing Problems, Scores, Item Response Theory
Brian C. Leventhal; Dena Pastor – Educational and Psychological Measurement, 2024
Low-stakes test performance commonly reflects examinee ability and effort. Examinees exhibiting low effort may be identified through rapid guessing behavior throughout an assessment. There has been a plethora of methods proposed to adjust scores once rapid guesses have been identified, but these have been plagued by strong assumptions or the…
Descriptors: College Students, Guessing (Tests), Multiple Choice Tests, Item Response Theory
Sideridis, Georgios; Alahmadi, Maisa – Journal of Intelligence, 2022
The goal of the present study was to extend earlier work on the estimation of person theta using maximum likelihood estimation in R by accounting for rapid guessing. This paper provides a modified R function that estimates person thetas under the Rasch or 2PL models and implements corrections for the presence of rapid guessing or informed…
Descriptors: Guessing (Tests), Reaction Time, Item Response Theory, Aptitude Tests
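The paper's contribution is a modified R function; purely as an illustration of the underlying computation, here is a minimal Python sketch of maximum-likelihood theta estimation under a 2PL with rapid-guess responses excluded (all names are illustrative, not the authors' code):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mle_theta(x, a, b, keep):
    """ML estimate of theta under a 2PL using only the items in `keep`
    (e.g., responses not flagged as rapid guesses).
    x: 0/1 responses; a, b: item discriminations and difficulties."""
    def neg_loglik(theta):
        p = 1.0 / (1.0 + np.exp(-a[keep] * (theta - b[keep])))
        p = np.clip(p, 1e-9, 1.0 - 1e-9)  # guard against log(0)
        return -np.sum(x[keep] * np.log(p) + (1 - x[keep]) * np.log(1.0 - p))
    return minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded").x
```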
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is not only influenced by students' ability level but also their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
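The mechanism at issue is easy to see in the maximum-information selection rule that CATs commonly use: if rapid guesses distort the interim theta estimate, item information is evaluated at the wrong point and less informative items get chosen. A minimal 2PL sketch (illustrative names, not the authors' algorithm):

```python
import numpy as np

def next_item(theta_hat, a, b, administered):
    """Pick the unadministered item with maximal Fisher information at the
    current ability estimate; for the 2PL,
    I_j(theta) = a_j**2 * P_j(theta) * (1 - P_j(theta))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
    info = a ** 2 * p * (1.0 - p)
    info[administered] = -np.inf  # mask items already given
    return int(np.argmax(info))
```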
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2020
This note raises caution that a finding of a marked pseudo-guessing parameter for an item within a three-parameter item response model could be spurious in a population with substantial unobserved heterogeneity. A numerical example is presented wherein, within each of two latent classes, the two-parameter logistic model is used to generate the data on a…
Descriptors: Guessing (Tests), Item Response Theory, Test Items, Models
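One simplified way to render the note's point (assuming the two classes, in proportions pi and 1 - pi, share a common theta scale): the population-level item response function is the mixture

```latex
P(X_j = 1 \mid \theta) = \pi\,P_{j1}(\theta) + (1 - \pi)\,P_{j2}(\theta)
```

which need not approach zero as theta decreases, so a 3PL fitted to the pooled data can return a markedly positive pseudo-guessing parameter even though neither class involves guessing.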
Jin, Kuan-Yu; Siu, Wai-Lok; Huang, Xiaoting – Journal of Educational Measurement, 2022
Multiple-choice (MC) items are widely used in educational tests. Distractor analysis, an important procedure for checking the utility of response options within an MC item, can be readily implemented in the framework of item response theory (IRT). Although random guessing is a common test-taker behavior when answering MC items, none of the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Item Response Theory, Attention
Agus Santoso; Heri Retnawati; Timbul Pardede; Ibnu Rafi; Munaya Nikma Rosyada; Gulzhaina K. Kassymova; Xu Wenxin – Practical Assessment, Research & Evaluation, 2024
A test blueprint is important in test development: it guides item writers in creating test items that meet the desired objectives and specifications (so-called a priori item characteristics), such as each item's targeted difficulty category and the distribution of items across difficulty levels.…
Descriptors: Foreign Countries, Undergraduate Students, Business English, Test Construction
Nagy, Gabriel; Ulitzsch, Esther; Lindner, Marlit Annalena – Journal of Computer Assisted Learning, 2023
Background: Item response times in computerized assessments are frequently used to identify rapid guessing behaviour as a manifestation of response disengagement. However, non-rapid responses (i.e., with longer response times) are not necessarily engaged, which means that response-time-based procedures could overlook disengaged responses.…
Descriptors: Guessing (Tests), Academic Persistence, Learner Engagement, Computer Assisted Testing