Showing 16 to 30 of 690 results
Peer reviewed
Direct link
Kirk Vanacore; Ashish Gurung; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Gaming the system, characterized by attempting to progress through a learning activity without engaging in essential learning behaviors, remains a persistent problem in computer-based learning platforms. This paper examines a simple intervention to mitigate the harmful effects of gaming the system by evaluating the impact of immediate feedback on…
Descriptors: Outcomes of Education, Ethics, Student Behavior, Electronic Learning
Peer reviewed
Direct link
Danielle R. Blazek; Jason T. Siegel – International Journal of Social Research Methodology, 2024
Social scientists have long agreed that satisficing behavior increases error and reduces the validity of survey data. There have been numerous reviews on detecting satisficing behavior, but preventing this behavior has received less attention. The current narrative review provides empirically supported guidance on preventing satisficing by…
Descriptors: Response Style (Tests), Responses, Reaction Time, Test Interpretation
Peer reviewed
Direct link
Baldwin, Peter – Educational Measurement: Issues and Practice, 2021
In the Bookmark standard-setting procedure, panelists are instructed to consider what examinees know rather than what they might attain by guessing; however, because examinees sometimes do guess, the procedure includes a correction for guessing. Like other corrections for guessing, the Bookmark's correction assumes that examinees either know the…
Descriptors: Guessing (Tests), Student Evaluation, Evaluation Methods, Standard Setting (Scoring)
Peer reviewed
Direct link
Brian C. Leventhal; Dena Pastor – Educational and Psychological Measurement, 2024
Low-stakes test performance commonly reflects examinee ability and effort. Examinees exhibiting low effort may be identified through rapid guessing behavior throughout an assessment. There has been a plethora of methods proposed to adjust scores once rapid guesses have been identified, but these have been plagued by strong assumptions or the…
Descriptors: College Students, Guessing (Tests), Multiple Choice Tests, Item Response Theory
Peer reviewed
Direct link
Rios, Joseph A. – Applied Measurement in Education, 2022
Testing programs are confronted with the decision of whether to report individual scores for examinees that have engaged in rapid guessing (RG). As noted by the "Standards for Educational and Psychological Testing," this decision should be based on a documented criterion that determines score exclusion. To this end, a number of heuristic…
Descriptors: Testing, Guessing (Tests), Academic Ability, Scores
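A minimal sketch of the kind of heuristic exclusion criterion this article surveys: withhold an examinee's score when the proportion of rapid guesses (responses faster than a per-item time threshold) exceeds a cutoff. The 10% cutoff and all values below are illustrative assumptions, not taken from the article.

```python
# Hedged sketch of a rapid-guessing (RG) score-exclusion heuristic.
# The thresholds and the 10% cutoff are illustrative assumptions.

def rapid_guess_flags(response_times, thresholds):
    """1 where the response time falls below the item's RG threshold."""
    return [1 if t < thr else 0 for t, thr in zip(response_times, thresholds)]

def exclude_score(response_times, thresholds, max_rg_proportion=0.10):
    """True if the examinee's score should be withheld from reporting."""
    flags = rapid_guess_flags(response_times, thresholds)
    return sum(flags) / len(flags) > max_rg_proportion
```

In practice such cutoffs are a policy decision; the article's point is that exclusion should rest on a documented criterion rather than an ad hoc one.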
Peer reviewed
PDF on ERIC Download full text
Sideridis, Georgios; Alahmadi, Maisa – Journal of Intelligence, 2022
The goal of the present study was to extend earlier work on the estimation of person theta using maximum likelihood estimation in R by accounting for rapid guessing. This paper provides a modified R function that accommodates person thetas using the Rasch or 2PL models and implements corrections for the presence of rapid guessing or informed…
Descriptors: Guessing (Tests), Reaction Time, Item Response Theory, Aptitude Tests
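The correction described above can be sketched as follows. This is not the authors' R function; it is an assumed Python illustration of one common correction, namely excluding responses flagged as rapid guesses (response time below a per-item threshold) from the likelihood before estimating theta under the 2PL model.

```python
# Hedged sketch: 2PL maximum-likelihood estimation of person theta,
# dropping responses flagged as rapid guesses from the likelihood.
# Thresholds are assumed known; grid search keeps the example simple.
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_theta(responses, a_params, b_params, response_times, thresholds,
              grid=None):
    """Grid-search MLE of theta using only effortful responses.

    responses: 0/1 item scores; response_times: seconds per item;
    thresholds: per-item rapid-guessing cutoffs in seconds.
    """
    if grid is None:
        grid = [x / 100.0 for x in range(-400, 401)]  # theta in [-4, 4]
    best_theta, best_ll = 0.0, -math.inf
    for theta in grid:
        ll = 0.0
        for x, a, b, t, thr in zip(responses, a_params, b_params,
                                   response_times, thresholds):
            if t < thr:          # rapid guess: drop from the likelihood
                continue
            p = p_2pl(theta, a, b)
            ll += math.log(p) if x == 1 else math.log(1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta
```

Dropping a lucky rapid-guess "correct" on a hard item yields a lower, less inflated theta estimate than fitting all responses; setting a equal to 1 for all items reduces the sketch to the Rasch case.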
Peer reviewed
Direct link
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is influenced not only by their ability level but also by their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
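The mechanism in this abstract can be made concrete with a standard maximum-information item-selection rule. The sketch below is an assumed illustration (item bank and values are hypothetical, not from the paper): under the 2PL model, the CAT picks the unadministered item most informative at the current theta estimate, so a theta estimate deflated by rapid guessing steers selection toward items that are too easy for the student's true ability.

```python
# Hedged sketch: maximum-information item selection in a 2PL CAT.
# Item parameters below are illustrative assumptions.
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """2PL item information: a^2 * p * (1 - p)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta_hat, item_bank, administered):
    """Pick the unadministered item with maximum information at theta_hat.

    item_bank: list of (a, b) tuples; administered: set of indices.
    """
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta_hat, *item_bank[i]))
```

With equal discriminations, information peaks where item difficulty matches theta_hat, so a rapid-guessing student whose estimate drifts downward keeps receiving easy items that carry little information about their true ability.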
Peer reviewed
Direct link
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2020
This note raises caution that a finding of a marked pseudo-guessing parameter for an item within a three-parameter item response model could be spurious in a population with substantial unobserved heterogeneity. A numerical example is presented wherein, in each of two classes, the two-parameter logistic model is used to generate the data on a…
Descriptors: Guessing (Tests), Item Response Theory, Test Items, Models
Peer reviewed
Direct link
Kseniia Marcq; Johan Braeken – Educational Assessment, Evaluation and Accountability, 2024
Gender differences in item nonresponse are well-documented in high-stakes achievement tests, where female students are shown to omit more items than male students. These gender differences in item nonresponse are often linked to differential risk-taking strategies, with females being risk-averse and unwilling to guess on an item, even if it could…
Descriptors: Secondary School Students, International Assessment, Gender Differences, Response Rates (Questionnaires)
Peer reviewed
Direct link
Jin, Kuan-Yu; Siu, Wai-Lok; Huang, Xiaoting – Journal of Educational Measurement, 2022
Multiple-choice (MC) items are widely used in educational tests. Distractor analysis, an important procedure for checking the utility of response options within an MC item, can be readily implemented in the framework of item response theory (IRT). Although random guessing is a popular behavior of test-takers when answering MC items, none of the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Item Response Theory, Attention
Peer reviewed
PDF on ERIC Download full text
McGuire, Michael J. – International Journal for the Scholarship of Teaching and Learning, 2023
College students in a lower-division psychology course made metacognitive judgments by predicting and postdicting performance for true-false, multiple-choice, and fill-in-the-blank question sets on each of three exams. This study investigated which question format would result in the most accurate metacognitive judgments. Extending Koriat's (1997)…
Descriptors: Metacognition, Multiple Choice Tests, Accuracy, Test Format
Peer reviewed
Direct link
Sachs, Rebecca; Hamrick, Phillip; McCormick, Timothy J.; Leow, Ronald P. – Studies in Second Language Acquisition, 2020
Subjective measures (SMs) of awareness assume (a) participants can accurately report the implicit/explicit status of their knowledge and (b) the act of reporting does not change that knowledge. However, SMs suffer from nonveridicality (e.g., overreporting of "guess" responses) and reactivity (e.g., prompting rule search). Attempting to…
Descriptors: Measures (Individuals), Measurement Techniques, Bias, Guessing (Tests)
Peer reviewed
Direct link
Wise, Steven L. – Applied Measurement in Education, 2020
In achievement testing there is typically a practical requirement that the set of items administered should be representative of some target content domain. This is accomplished by establishing test blueprints specifying the content constraints to be followed when selecting the items for a test. Sometimes, however, students give disengaged…
Descriptors: Test Items, Test Content, Achievement Tests, Guessing (Tests)
Peer reviewed
Direct link
Gustafsson, Martin; Barakat, Bilal Fouad – Comparative Education Review, 2023
International assessments inform education policy debates, yet little is known about their floor effects: To what extent do they fail to differentiate between the lowest performers, and what are the implications of this? TIMSS, SACMEQ, and LLECE data are analyzed to answer this question. In TIMSS, floor effects have been reduced through the…
Descriptors: Achievement Tests, Elementary Secondary Education, International Assessment, Foreign Countries
Peer reviewed
PDF on ERIC Download full text
Agus Santoso; Heri Retnawati; Timbul Pardede; Ibnu Rafi; Munaya Nikma Rosyada; Gulzhaina K. Kassymova; Xu Wenxin – Practical Assessment, Research & Evaluation, 2024
The test blueprint is important in test development: it guides the test item writer in creating test items according to the desired objectives and specifications or characteristics (so-called a priori item characteristics), such as the level of item difficulty and the distribution of items based on their difficulty level.…
Descriptors: Foreign Countries, Undergraduate Students, Business English, Test Construction