Showing 1 to 15 of 87 results
Peer reviewed
Rios, Joseph A.; Deng, Jiayi; Ihlenfeldt, Samuel D. – Educational Assessment, 2022
The present meta-analysis sought to quantify the average degree of aggregated test score distortion due to rapid guessing (RG). Included studies group-administered a low-stakes cognitive assessment, identified RG via response times, and reported the rate of examinees engaging in RG, the percentage of RG responses observed, and/or the degree of…
Descriptors: Guessing (Tests), Testing Problems, Scores, Item Response Theory
Peer reviewed
Danielle R. Blazek; Jason T. Siegel – International Journal of Social Research Methodology, 2024
Social scientists have long agreed that satisficing behavior increases error and reduces the validity of survey data. There have been numerous reviews on detecting satisficing behavior, but preventing this behavior has received less attention. The current narrative review provides empirically supported guidance on preventing satisficing by…
Descriptors: Response Style (Tests), Responses, Reaction Time, Test Interpretation
Peer reviewed
Stoeckel, Tim; McLean, Stuart; Nation, Paul – Studies in Second Language Acquisition, 2021
Size and levels tests are two test types commonly used to assess vocabulary knowledge for reading. This article first reviews several frequently stated purposes of such tests (e.g., materials selection, tracking vocabulary growth) and provides a reasoned argument for the precision needed to serve such purposes. Then three sources of…
Descriptors: Vocabulary Development, Receptive Language, Written Language, Knowledge Level
Peer reviewed
Cesur, Kursat – Educational Policy Analysis and Strategic Research, 2019
Examinees' performances are assessed using a wide variety of techniques. Multiple-choice (MC) tests are among the most frequently used. Nearly all standardized achievement tests make use of MC test items, and there is a variety of ways to score these tests. The study compares number-right and liberal scoring (SAC) methods. Mixed…
Descriptors: Multiple Choice Tests, Scoring, Evaluation Methods, Guessing (Tests)
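The scoring contrast at issue can be sketched in a few lines. The abstract does not spell out the SAC procedure, so this Python sketch pairs number-right scoring with the classic correction-for-guessing formula S = R − W/(k − 1) only as a stand-in for the general idea; the function names and data are illustrative, not from the study.

def number_right(responses, key):
    # One point per correct answer; wrong answers and omits score zero.
    return sum(r == k for r, k in zip(responses, key))

def formula_score(responses, key, n_options=4):
    # Classic correction for guessing: S = R - W/(k - 1); omits are not penalized.
    right = sum(r == k for r, k in zip(responses, key))
    wrong = sum(r is not None and r != k for r, k in zip(responses, key))
    return right - wrong / (n_options - 1)

key = ["a", "c", "b", "d", "a"]
answers = ["a", "c", "d", None, "b"]   # two right, two wrong, one omit
print(number_right(answers, key))      # 2
print(formula_score(answers, key))     # 2 - 2/3 = 1.33...

Under the corrected rule a wrong blind guess on a four-option item costs 1/3 point, so purely random responding gains nothing in expectation.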
Peer reviewed
Sarac, Merve; Loken, Eric – International Journal of Testing, 2023
This study is an exploratory analysis of examinee behavior in a large-scale language proficiency test. Despite a number-right scoring system with no penalty for guessing, we found that 16% of examinees omitted at least one answer and that women were more likely than men to omit answers. Item-response theory analyses treating the omitted responses…
Descriptors: English (Second Language), Language Proficiency, Language Tests, Second Language Learning
Peer reviewed
Haladyna, Thomas M.; Rodriguez, Michael C.; Stevens, Craig – Applied Measurement in Education, 2019
Evidence is mounting in support of the guidance to employ more three-option multiple-choice items. From theoretical analyses, empirical results, and practical considerations, such items are of equal or higher quality than four- or five-option items, and more items can be administered to improve content coverage. This study looks at 58 tests,…
Descriptors: Multiple Choice Tests, Test Items, Testing Problems, Guessing (Tests)
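The practical side of that argument can be made concrete with the Spearman-Brown prophecy formula; the figures here are illustrative assumptions, not numbers from the study. If three-option items take roughly three-quarters the time of four-option items, about m = 4/3 as many fit in the same testing window, so a test with reliability 0.80 would be prophesied to reach

\[
\rho' = \frac{m\rho}{1 + (m - 1)\rho} = \frac{(4/3)(0.80)}{1 + (1/3)(0.80)} \approx 0.84 ,
\]

before counting any gain in item quality or content coverage.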
Peer reviewed
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). There has been little published research on choice of exam questions in recent years in the UK. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Peer reviewed
Guo, Hongwen; Rios, Joseph A.; Haberman, Shelby; Liu, Ou Lydia; Wang, Jing; Paek, Insu – Applied Measurement in Education, 2016
Unmotivated test takers who guess rapidly on items can negatively affect validity studies and evaluations of teacher and institution performance, making it critical to identify them. The authors propose a new nonparametric method for finding response-time thresholds for flagging item responses that result from rapid-guessing…
Descriptors: Guessing (Tests), Reaction Time, Nonparametric Statistics, Models
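As a point of orientation, the Python sketch below flags rapid guesses with a simple normative-threshold heuristic from the wider response-time literature (per-item threshold at 10% of the mean item time, clipped to a sensible range); it is an assumed stand-in for illustration, not the nonparametric procedure the authors propose.

import numpy as np

def flag_rapid_guesses(rt, pct=0.10, floor=1.0, cap=10.0):
    # Per-item threshold = pct * mean item response time, clipped to
    # [floor, cap] seconds. NT10-style heuristic, not Guo et al.'s method.
    thresholds = np.clip(pct * rt.mean(axis=0), floor, cap)
    return rt < thresholds  # True where a response is flagged as a rapid guess

rng = np.random.default_rng(0)
rt = rng.lognormal(mean=3.0, sigma=0.6, size=(500, 20))  # simulated examinee-by-item RTs
flags = flag_rapid_guesses(rt)
print(f"{flags.mean():.1%} of responses flagged")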
Peer reviewed
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method with examination based on constructed-response questions (CRQs). Although MCQs have an advantage in objectivity of grading and speed in producing results, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis
Peer reviewed
Boldt, R. R. – Journal of Educational and Psychological Measurement, 1974
Descriptors: Confidence Testing, Guessing (Tests), Scoring Formulas, Testing Problems
Peer reviewed
Burton, Richard F. – Assessment & Evaluation in Higher Education, 2001
Item-discrimination indices are numbers calculated from test data that are used in assessing the effectiveness of individual test questions. This article asserts that the indices are so unreliable as to suggest that countless good questions may have been discarded over the years. It considers how the indices, and hence overall test reliability,…
Descriptors: Guessing (Tests), Item Analysis, Test Reliability, Testing Problems
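The indices Burton discusses are typically corrected item-total (point-biserial) correlations. A minimal computation, with simulated data standing in for real responses, looks like this:

import numpy as np

def discrimination(item, totals):
    # Point-biserial discrimination: correlation between the 0/1 item
    # score and the rest score (total with the item itself removed).
    rest = totals - item
    x, y = item - item.mean(), rest - rest.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

rng = np.random.default_rng(1)
responses = (rng.random((60, 40)) < 0.6).astype(float)  # simulated: 60 examinees, 40 items
totals = responses.sum(axis=1)
print(discrimination(responses[:, 0], totals))

With only 60 examinees, the sampling error of such a correlation is large (a 95% interval spans roughly ±0.25 around zero), which is the instability underlying Burton's concern.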
Koplyay, Janos B.; And Others – 1972
The relationship between true ability (operationally defined as the number of items for which the examinee actually knew the correct answer) and the effects of guessing upon observed test variance was investigated. Three basic hypotheses were treated mathematically: there is no functional relationship between true ability and guessing success;…
Descriptors: Guessing (Tests), Predictor Variables, Probability, Scoring
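A sketch of the kind of relationship being treated, under the simplest blind-guessing model (an assumption for illustration; the report's own hypotheses differ in detail): with true score T, test length n, and k options per item,

\[
X = T + G, \qquad G \mid T \sim \mathrm{Bin}\!\left(n - T,\ \tfrac{1}{k}\right),
\]
\[
\operatorname{Var}(X) = \left(1 - \tfrac{1}{k}\right)^{2} \operatorname{Var}(T) + \tfrac{1}{k}\left(1 - \tfrac{1}{k}\right)\bigl(n - \mathbb{E}[T]\bigr),
\]

so guessing both shrinks the true-score component of observed variance and adds a binomial noise term that grows as ability falls.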
Peer reviewed
Slakter, Malcolm J.; And Others – Journal of Educational Measurement, 1970
A learning program in test-taking skills is proposed as a method of decreasing errors of measurement and testing handicaps. (GS)
Descriptors: Behavior Change, Disadvantaged, Guessing (Tests), Programed Instructional Materials
Abu-Sayf, F. K. – Educational Technology, 1979
Compares methods of scoring multiple-choice tests and discusses number-right scoring, guessing, and omitted items. Test instructions and answer changing are addressed, and attempts to weight test items are reviewed. It is concluded that, since innovations in test scoring are not well established, the number-right method is most appropriate. (RAO)
Descriptors: Guessing (Tests), Multiple Choice Tests, Objective Tests, Scoring
Peer reviewed
Tallmadge, G. Kasten – Evaluation Review, 1982
Correction for guessing does not fulfill its intended function when test takers who have nothing to gain from scoring well respond randomly when they could have answered correctly had they tried. Raw scores underestimate abilities. If random guessing is more prevalent in the control group, correction for guessing inflates treatment effects.…
Descriptors: Guessing (Tests), Research Methodology, Research Problems, Responses
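The mechanism is easy to verify from the scoring formula itself. With n items, k options per item, R rights and W wrongs, formula scoring gives

\[
S = R - \frac{W}{k - 1}; \qquad \mathbb{E}[R] = \frac{n}{k},\quad \mathbb{E}[W] = \frac{n(k-1)}{k} \ \text{ under random responding},\quad \text{so}\ \ \mathbb{E}[S] = \frac{n}{k} - \frac{n}{k} = 0 .
\]

An unmotivated examinee who responds randomly therefore scores zero in expectation regardless of true ability, so when random responding is concentrated in the control group, the "correction" widens the group difference instead of removing bias.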