Publication Date
In 2025: 1
Since 2024: 4
Since 2021 (last 5 years): 7
Since 2016 (last 10 years): 11
Since 2006 (last 20 years): 20
Source
Educational and Psychological Measurement: 55
Author
DeMars, Christine E.: 2
Deng, Jiayi: 2
Frary, Robert B.: 2
Hambleton, Ronald K.: 2
Lord, Frederic M.: 2
Rios, Joseph A.: 2
Seo, Dong Gi: 2
Sideridis, Georgios: 2
Smith, Richard M.: 2
Traub, Ross E.: 2
Tsaousis, Ioannis: 2
Publication Type
Journal Articles: 35
Reports - Research: 26
Reports - Evaluative: 9
Speeches/Meeting Papers: 2
Reports - Descriptive: 1
Education Level
Higher Education: 5
Postsecondary Education: 5
High Schools: 2
Secondary Education: 1
Location
Germany: 2
Australia: 1
California: 1
Assessments and Surveys
Graduate Record Examinations: 1
SAT (College Admission Test): 1
Joseph A. Rios; Jiayi Deng – Educational and Psychological Measurement, 2024
Rapid guessing (RG) is a form of non-effortful responding that is characterized by short response latencies. This construct-irrelevant behavior has been shown in previous research to bias inferences concerning measurement properties and scores. To mitigate these deleterious effects, a number of response time threshold scoring procedures have been…
Descriptors: Reaction Time, Scores, Item Response Theory, Guessing (Tests)
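The core of such threshold-based procedures is a simple flag-and-rescore step. Below is a minimal sketch, not the authors' implementation; the 3-second threshold and the `rt`/`resp` arrays are hypothetical:

```python
import numpy as np

# Sketch of response-time-threshold flagging for rapid guessing (RG).
# Assumption (not from the article): a common 3-second threshold; real
# applications typically derive item-specific thresholds from the RT data.
THRESHOLD = 3.0  # seconds

rt = np.array([[12.4, 1.1, 8.7],
               [2.2, 15.0, 0.9]])          # response times (persons x items)
resp = np.array([[1, 0, 1],
                 [0, 1, 1]], dtype=float)  # scored item responses

rg_flags = rt < THRESHOLD       # True where the latency suggests a rapid guess
rescored = resp.copy()
rescored[rg_flags] = np.nan     # treat flagged responses as not administered

print(rg_flags)
print(rescored)                 # matrix handed on to ability/item estimation
```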
Rios, Joseph A. – Educational and Psychological Measurement, 2022
The presence of rapid guessing (RG) presents a challenge to practitioners in obtaining accurate estimates of measurement properties and examinee ability. In response to this concern, researchers have utilized response times as a proxy of RG and have attempted to improve parameter estimation accuracy by filtering RG responses using popular scoring…
Descriptors: Guessing (Tests), Classification, Accuracy, Computation
Joseph A. Rios; Jiayi Deng – Educational and Psychological Measurement, 2025
To mitigate the potential damaging consequences of rapid guessing (RG), a form of noneffortful responding, researchers have proposed a number of scoring approaches. The present simulation study examines the robustness of the most popular of these approaches, the unidimensional effort-moderated (EM) scoring procedure, to multidimensional RG (i.e.,…
Descriptors: Scoring, Guessing (Tests), Reaction Time, Item Response Theory
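For reference, the unidimensional EM procedure scores each response conditionally on its RG flag. A standard formulation (after Wise and DeMars; the notation below is ours, not the article's) is:

```latex
% Effort-moderated item response function: S_ij = 1 if person i's response
% time on item j exceeds the RG threshold (solution behavior), 0 otherwise;
% k_j is the number of response options on item j.
P(X_{ij} = 1 \mid \theta_i) = S_{ij}\, P_j(\theta_i) + (1 - S_{ij})\,\frac{1}{k_j}
```

A flagged response contributes a constant 1/k_j regardless of θ_i, so it carries no information about ability; the simulation asks how this unidimensional treatment holds up when RG is itself multidimensional.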
Brian C. Leventhal; Dena Pastor – Educational and Psychological Measurement, 2024
Low-stakes test performance commonly reflects both examinee ability and effort. Examinees exhibiting low effort may be identified through rapid guessing behavior throughout an assessment. A plethora of methods has been proposed to adjust scores once rapid guesses have been identified, but these have been plagued by strong assumptions or the…
Descriptors: College Students, Guessing (Tests), Multiple Choice Tests, Item Response Theory
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2020
This note raises caution that a finding of a marked pseudo-guessing parameter for an item within a three-parameter item response model could be spurious in a population with substantial unobserved heterogeneity. A numerical example is presented wherein, within each of two classes, the two-parameter logistic model is used to generate the data on a…
Descriptors: Guessing (Tests), Item Response Theory, Test Items, Models
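The pseudo-guessing parameter in question is the lower asymptote c_j of the three-parameter logistic (3PL) model:

```latex
% 3PL item response function; c_j is the pseudo-guessing (lower-asymptote)
% parameter, a_j the discrimination, and b_j the difficulty of item j.
P(X_{ij} = 1 \mid \theta_i) = c_j + (1 - c_j)\,
  \frac{\exp\!\bigl[a_j(\theta_i - b_j)\bigr]}{1 + \exp\!\bigl[a_j(\theta_i - b_j)\bigr]}
```

The note's example shows that a mixture of two latent classes, each following a 2PL model (i.e., c_j = 0 in both), can nonetheless yield a markedly nonzero estimate of c_j when a single-population 3PL is fit.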
Jana Welling; Timo Gnambs; Claus H. Carstensen – Educational and Psychological Measurement, 2024
Disengaged responding poses a severe threat to the validity of educational large-scale assessments, because item responses from unmotivated test-takers do not reflect their actual ability. Existing identification approaches rely primarily on item response times, which bears the risk of misclassifying fast engaged or slow disengaged responses.…
Descriptors: Foreign Countries, College Students, Guessing (Tests), Multiple Choice Tests
Sideridis, Georgios; Tsaousis, Ioannis; Al-Harbi, Khaleel – Educational and Psychological Measurement, 2022
The goal of the present study was to address the analytical complexity of incorporating responses and response times through applying the Jeon and De Boeck mixture item response theory model in Mplus 8.7. Using both simulated and real data, we attempt to identify subgroups of responders that are rapid guessers or engage knowledge retrieval…
Descriptors: Reaction Time, Guessing (Tests), Item Response Theory, Information Retrieval
Deribo, Tobias; Goldhammer, Frank; Kroehne, Ulf – Educational and Psychological Measurement, 2023
As researchers in the social sciences, we are often interested in using assessments and questionnaires to study constructs that are not directly observable. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is skimmed briefly but not read or engaged with in depth. Hence, a…
Descriptors: Reaction Time, Guessing (Tests), Behavior Patterns, Bias
Paek, Insu – Educational and Psychological Measurement, 2016
The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test…
Descriptors: Guessing (Tests), Computation, Statistical Analysis, Test Length
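For orientation, coefficient alpha for a k-item test is:

```latex
% Coefficient alpha: k = number of items, sigma_i^2 = variance of item i,
% sigma_X^2 = variance of the total score X.
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)
```

Guessing adds trait-irrelevant variance to the item scores, which is what allows it to affect not only the point estimate of α but also, as the study examines, the behavior of its interval estimators.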
DeMars, Christine E.; Jurich, Daniel P. – Educational and Psychological Measurement, 2015
In educational testing, differential item functioning (DIF) statistics must be accurately estimated to ensure the appropriate items are flagged for inspection or removal. This study showed how using the Rasch model to estimate DIF may introduce considerable bias in the results when there are large group differences in ability (impact) and the data…
Descriptors: Test Bias, Guessing (Tests), Ability, Differences
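One way to see why such bias arises: the Rasch item response function has no lower asymptote, so successful guessing by low-ability examinees is misfit by construction:

```latex
% Rasch (1PL) model: only a difficulty parameter b_j per item, with no
% guessing parameter, so data generated with guessing cannot be fit exactly.
P(X_{ij} = 1 \mid \theta_i) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}
```

When the groups differ substantially in ability, that misfit is absorbed differently in each group's item difficulty estimates and can surface as spurious DIF.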
Andrich, David; Marais, Ida; Humphry, Stephen Mark – Educational and Psychological Measurement, 2016
Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The…
Descriptors: Guessing (Tests), Statistical Bias, Item Response Theory, Multiple Choice Tests
Sideridis, Georgios; Tsaousis, Ioannis; Al Harbi, Khaleel – Educational and Psychological Measurement, 2017
The purpose of the present article was to illustrate, using an example from a national assessment, the value of analyzing the behavior of distractors in measures that use the multiple-choice format. A secondary purpose was to illustrate four remedial actions that can potentially improve the measurement of the…
Descriptors: Multiple Choice Tests, Attention Control, Testing, Remedial Instruction
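A basic form of such distractor analysis simply cross-tabulates option choices against score groups. The sketch below is illustrative only; the option labels and scores are hypothetical, not data from the national assessment:

```python
import pandas as pd

# Sketch of a classical distractor analysis for one multiple-choice item:
# proportion choosing each option, broken down by total-score tercile.
# All data below are hypothetical.
choices = pd.Series(list("ABCDABCACABBACDA"))        # selected option per examinee
total = pd.Series([12, 5, 7, 3, 14, 6, 8, 13, 9, 11,
                   4, 6, 15, 10, 2, 13])             # total test scores

group = pd.qcut(total, q=3, labels=["low", "mid", "high"])
table = pd.crosstab(group, choices, normalize="index")

# A functioning distractor attracts mostly low-scoring examinees; one that
# attracts the high group instead flags a possibly miskeyed or flawed item.
print(table.round(2))
```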
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2015
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm…
Descriptors: Computer Assisted Testing, Adaptive Testing, Accuracy, Fidelity
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2013
The usefulness of the l_z person-fit index was investigated with achievement test data from 20 exams given to more than 3,200 college students. Results for three methods of estimating θ showed that the distributions of l_z were not consistent with its theoretical distribution, resulting in general overfit to the item response…
Descriptors: Achievement Tests, College Students, Goodness of Fit, Item Response Theory
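The l_z index is the standardized log-likelihood of an examinee's response pattern evaluated at the ability estimate; in the usual notation, with P_i = P_i(θ̂) and Q_i = 1 - P_i:

```latex
% l_z person-fit statistic (standardized log-likelihood of the 0/1
% response pattern u at the ability estimate \hat\theta):
l_0 = \sum_i \bigl[ u_i \ln P_i + (1 - u_i) \ln Q_i \bigr], \qquad
l_z = \frac{l_0 - \mathrm{E}(l_0)}{\sqrt{\operatorname{Var}(l_0)}}
% with E(l_0) = \sum_i [P_i \ln P_i + Q_i \ln Q_i] and
% Var(l_0) = \sum_i P_i Q_i [\ln(P_i / Q_i)]^2.
```

In theory l_z is approximately standard normal; the study's finding is that its empirical distributions departed from that reference distribution, producing general overfit.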
Santelices, Maria Veronica; Wilson, Mark – Educational and Psychological Measurement, 2012
The relationship between differential item functioning (DIF) and item difficulty on the SAT is such that more difficult items tended to exhibit DIF in favor of the focal group (usually minority groups). These results were reported by Kulick and Hu and by Freedle, and they have been enthusiastically discussed in more recent literature. Examining the…
Descriptors: Test Bias, Test Items, Difficulty Level, Statistical Analysis