Showing 1 to 15 of 53 results
Peer reviewed
Deribo, Tobias; Kroehne, Ulf; Goldhammer, Frank – Journal of Educational Measurement, 2021
The increased availability of time-related information as a result of computer-based assessment has enabled new ways to measure test-taking engagement. One of these ways is to distinguish between solution and rapid guessing behavior. Prior research has recommended response-level filtering to deal with rapid guessing. Response-level filtering can…
Descriptors: Guessing (Tests), Models, Reaction Time, Statistical Analysis
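As a rough illustration of the response-level filtering described above (a sketch, not the authors' procedure), the Python snippet below drops responses whose times fall under a fixed threshold before computing scores; the data, column names, and threshold are assumed for the example, and operational work typically derives item-specific thresholds.

    import pandas as pd

    # Assumed layout: one row per item response, with response time in seconds.
    responses = pd.DataFrame({
        "person":     [1, 1, 1, 2, 2, 2],
        "item":       ["i1", "i2", "i3", "i1", "i2", "i3"],
        "correct":    [1, 0, 1, 1, 1, 0],
        "rt_seconds": [14.2, 1.1, 22.5, 2.0, 9.8, 17.3],
    })

    RAPID_GUESS_THRESHOLD = 3.0  # assumed global cutoff; studies often set one per item

    # Response-level filtering: flag rapid guesses and exclude them from scoring.
    responses["rapid_guess"] = responses["rt_seconds"] < RAPID_GUESS_THRESHOLD
    solution_behavior = responses.loc[~responses["rapid_guess"]]

    # Proportion correct based only on responses classified as solution behavior.
    print(solution_behavior.groupby("person")["correct"].mean())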
Peer reviewed
Wise, Steven L.; Kuhfeld, Megan R. – Journal of Educational Measurement, 2021
There has been a growing research interest in the identification and management of disengaged test taking, which poses a validity threat that is particularly prevalent with low-stakes tests. This study investigated effort-moderated (E-M) scoring, in which item responses classified as rapid guesses are identified and excluded from scoring. Using…
Descriptors: Scoring, Data Use, Response Style (Tests), Guessing (Tests)
Peer reviewed
Jin, Kuan-Yu; Siu, Wai-Lok; Huang, Xiaoting – Journal of Educational Measurement, 2022
Multiple-choice (MC) items are widely used in educational tests. Distractor analysis, an important procedure for checking the utility of response options within an MC item, can be readily implemented in the framework of item response theory (IRT). Although random guessing is a common test-taker behavior when answering MC items, none of the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Item Response Theory, Attention
Peer reviewed
Lee, Sora; Bolt, Daniel M. – Journal of Educational Measurement, 2018
Both the statistical and interpretational shortcomings of the three-parameter logistic (3PL) model in accommodating guessing effects on multiple-choice items are well documented. We consider the use of a residual heteroscedasticity (RH) model as an alternative, and compare its performance to the 3PL with real test data sets and through simulation…
Descriptors: Statistical Analysis, Models, Guessing (Tests), Multiple Choice Tests
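For reference, the three-parameter logistic model at issue here gives the probability of a correct response as P(X = 1 | \theta) = c + (1 - c) / (1 + \exp[-a(\theta - b)]), where a is the item discrimination, b the difficulty, and c the lower asymptote ("pseudo-guessing") parameter; the residual heteroscedasticity model is the authors' proposed alternative to this form.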
Peer reviewed
Drabinová, Adéla; Martinková, Patrícia – Journal of Educational Measurement, 2017
In this article we present a general approach not relying on item response theory models (non-IRT) to detect differential item functioning (DIF) in dichotomous items in the presence of guessing. The proposed nonlinear regression (NLR) procedure for DIF detection is an extension of a method based on logistic regression. As a non-IRT approach, NLR can…
Descriptors: Test Items, Regression (Statistics), Guessing (Tests), Identification
Peer reviewed
Andrich, David; Marais, Ida – Journal of Educational Measurement, 2018
Even though guessing biases difficulty estimates as a function of item difficulty in the dichotomous Rasch model, assessment programs with tests which include multiple-choice items often construct scales using this model. Research has shown that when all items are multiple-choice, this bias can largely be eliminated. However, many assessments have…
Descriptors: Multiple Choice Tests, Test Items, Guessing (Tests), Test Bias
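For context, the dichotomous Rasch model referred to here contains no guessing parameter: the probability of a correct response is P(X = 1 | \theta) = \exp(\theta - \delta) / [1 + \exp(\theta - \delta)], with person ability \theta and item difficulty \delta, which is why guessing biases the difficulty estimates \delta when it occurs.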
Peer reviewed
Ames, Allison; Smith, Elizabeth – Journal of Educational Measurement, 2018
Bayesian methods incorporate model parameter information prior to data collection. Eliciting information from content experts is an option, but has seen little implementation in Bayesian item response theory (IRT) modeling. This study aims to use ethical reasoning content experts to elicit prior information and incorporate this information into…
Descriptors: Item Response Theory, Bayesian Statistics, Ethics, Specialists
Peer reviewed
Wise, Steven L.; Kingsbury, G. Gage – Journal of Educational Measurement, 2016
This study examined the utility of response time-based analyses in understanding the behavior of unmotivated test takers. For the data from an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent…
Descriptors: Achievement Tests, Student Motivation, Test Wiseness, Adaptive Testing
Peer reviewed
Falk, Carl F.; Cai, Li – Journal of Educational Measurement, 2016
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Descriptors: Item Response Theory, Guessing (Tests), Mathematics Tests, Simulation
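Read from the abstract alone (the exact parameterization is in the article), the proposed item response function appears to take the general form P(X = 1 | \theta) = c + (1 - c) / (1 + \exp[-m(\theta)]), where m(\theta) is a monotonic polynomial in \theta and c is the lower asymptote, reducing to the three-parameter logistic model when m is linear.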
Peer reviewed
Huang, Hung-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
The DINA (deterministic input, noisy 'and' gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The slip and guess parameters are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Descriptors: Models, Guessing (Tests), Probability, Ability
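For reference, the standard DINA item response function referred to here is P(X_{ij} = 1) = (1 - s_j)^{\eta_{ij}} g_j^{1 - \eta_{ij}}, where \eta_{ij} indicates whether examinee i has mastered all attributes required by item j, s_j is the slip parameter, and g_j is the guess parameter; the study above takes a random-effect approach to extending this model.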
Peer reviewed
Mueller, Daniel J.; Wasser, Virginia – Journal of Educational Measurement, 1977
Eighteen studies of the effects of changing initial answers to objective test items are reviewed. While students throughout the total test score range tended to gain more points than they lost, higher scoring students gained more than lower scoring students did. Suggestions for further research are made. (Author/JKS)
Descriptors: Guessing (Tests), Literature Reviews, Multiple Choice Tests, Objective Tests
Peer reviewed
Fischer, Frederick E. – Journal of Educational Measurement, 1970
The personal biserial index is a correlation that measures the relationship between the difficulty of the items in a test for the person, as evidenced by that person's passes and failures, and the difficulty of the items as evidenced by group-determined item difficulties. Reliability and predictive validity are studied. (Author/RF)
Descriptors: Guessing (Tests), Item Analysis, Predictive Measurement, Predictor Variables
Peer reviewed
Albanese, Mark A. – Journal of Educational Measurement, 1988
Estimates of the effects of formula scoring on the individual examinee's score are presented. Results for easy, moderate, and hard tests are examined. Test characteristics from several studies show that some examinees would increase their scores substantially if they were to answer items omitted under formula directions. (SLD)
Descriptors: Difficulty Level, Guessing (Tests), Scores, Scoring Formulas
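For reference, the conventional formula score is S = R - W / (k - 1), where R is the number of right answers, W the number of wrong answers, and k the number of options per item; omitted items contribute nothing, which is why answering previously omitted items can raise (or lower) an examinee's score.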
Peer reviewed
Wilcox, Rand R. – Journal of Educational Measurement, 1982
A new model for measuring misinformation is suggested. A modification of Wilcox's strong true-score model is indicated for certain situations, since it corrects for guessing without assuming that guessing is random. (Author/GK)
Descriptors: Achievement Tests, Guessing (Tests), Mathematical Models, Scoring Formulas
Peer reviewed
Donlon, Thomas F. – Journal of Educational Measurement, 1981
Scores within the chance range are differentiated, "uninterpretable" scores being those that demonstrate randomness (broadly defined) by failing to achieve typical levels of correlation with group-determined difficulty. The relevant literature is reviewed. Finally, randomness and uninterpretability are examined in light of the…
Descriptors: Difficulty Level, Guessing (Tests), Multiple Choice Tests, Scores