Showing 1 to 15 of 100 results
Peer reviewed
Wise, Steven L.; Kingsbury, G. Gage – Journal of Educational Measurement, 2016
This study examined the utility of response time-based analyses in understanding the behavior of unmotivated test takers. For the data from an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent…
Descriptors: Achievement Tests, Student Motivation, Test Wiseness, Adaptive Testing
Peer reviewed
PDF on ERIC
Merrel, Jeremy D.; Cirillo, Pier F.; Schwartz, Pauline M.; Webb, Jeffrey A. – Higher Education Studies, 2015
Multiple choice testing is a common but often ineffective method for evaluating learning. A newer approach, however, using Immediate Feedback Assessment Technique (IF-AT®, Epstein Educational Enterprise, Inc.) forms, offers several advantages. In particular, a student learns immediately if his or her answer is correct and, in the case of an…
Descriptors: Multiple Choice Tests, Feedback (Response), Evaluation Methods, Guessing (Tests)
Peer reviewed
Holster, Trevor A.; Lake, J. – Language Assessment Quarterly, 2016
Stewart questioned Beglar's use of Rasch analysis of the Vocabulary Size Test (VST) and advocated the use of 3-parameter logistic item response theory (3PLIRT) on the basis that it models a non-zero lower asymptote for items, often called a "guessing" parameter. In support of this theory, Stewart presented fit statistics derived from…
Descriptors: Guessing (Tests), Item Response Theory, Vocabulary, Language Tests
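The 3PLIRT model at issue in this exchange can be sketched as follows. This is the standard three-parameter logistic item response function in conventional IRT notation (discrimination a, difficulty b, lower asymptote c); it is a generic illustration, not code or notation taken from the Holster and Lake article.

```python
import math

def p_correct_3pl(theta: float, a: float, b: float, c: float) -> float:
    """Probability of a correct response under the 3PL model.

    theta: examinee ability
    a: item discrimination
    b: item difficulty
    c: lower asymptote (the "guessing" parameter) -- even an
       examinee of very low ability answers correctly with
       probability approaching c.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

For example, with a = 1, b = 0, and c = 0.25 (a four-option item), an examinee at theta = 0 has probability 0.25 + 0.75 × 0.5 = 0.625 of answering correctly, while the probability never falls below 0.25 no matter how low theta goes.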
Peer reviewed
Stewart, Jeffrey; White, David A. – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2011
Multiple-choice tests such as the Vocabulary Levels Test (VLT) are often viewed as a preferable estimator of vocabulary knowledge when compared to yes/no checklists, because self-reporting tests introduce the possibility of students overreporting or underreporting scores. However, multiple-choice tests have their own unique disadvantages. It has…
Descriptors: Guessing (Tests), Scoring Formulas, Multiple Choice Tests, Test Reliability
Peer reviewed
Boldt, R. R. – Journal of Educational and Psychological Measurement, 1974
Descriptors: Confidence Testing, Guessing (Tests), Scoring Formulas, Testing Problems
Peer reviewed
Hutchinson, T. P. – Contemporary Educational Psychology, 1980
In scoring multiple-choice tests, a score of 1 is given to right answers, 0 to unanswered questions, and some negative score to wrong answers. This paper discusses the relation of this negative score to the assumptions made about the partial knowledge which the subjects may have. (Author/GDC)
Descriptors: Guessing (Tests), Knowledge Level, Multiple Choice Tests, Scoring Formulas
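The scoring rule Hutchinson analyzes can be sketched as follows. The function name and the 1/(k − 1) penalty shown in the comment are illustrative conventions, not details drawn from the paper itself.

```python
def score_item(response, key, penalty: float) -> float:
    """Score one multiple-choice item: 1 for a right answer,
    0 for an omission (response is None), -penalty for a wrong answer."""
    if response is None:
        return 0.0
    return 1.0 if response == key else -penalty

# A common choice is penalty = 1/(k - 1) for k-option items,
# which makes purely random guessing worth 0 in expectation:
# (1/k)(1) + ((k-1)/k)(-1/(k-1)) = 0.
```

Hutchinson's point is that the appropriate size of this penalty depends on what one assumes about examinees' partial knowledge, not only on the random-guessing baseline.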
Peer reviewed
Albanese, Mark A. – Journal of Educational Measurement, 1988
Estimates of the effects of use of formula scoring on the individual examinee's score are presented. Results for easy, moderate, and hard tests are examined. Using test characteristics from several studies shows that some examinees would increase scores substantially if they were to answer items omitted under formula directions. (SLD)
Descriptors: Difficulty Level, Guessing (Tests), Scores, Scoring Formulas
Peer reviewed
Wilcox, Rand R. – Journal of Educational Measurement, 1982
A new model for measuring misinformation is suggested. A modification of Wilcox's strong true-score model, to be used in certain situations, is indicated, since it solves the problem of correcting for guessing without assuming guessing is random. (Author/GK)
Descriptors: Achievement Tests, Guessing (Tests), Mathematical Models, Scoring Formulas
Peer reviewed
Hsu, Louis M. – Educational and Psychological Measurement, 1979
Though the Paired-Item-Score (Eakin and Long) (EJ 174 780) method of scoring true-false tests has certain advantages over the traditional scoring methods (percentage right and right minus wrong), these advantages are attained at the cost of a larger risk of misranking the examinees. (Author/BW)
Descriptors: Comparative Analysis, Guessing (Tests), Objective Tests, Probability
Peer reviewed
Hamdan, M. A. – Journal of Experimental Education, 1979
The distribution theory underlying corrections for guessing is analyzed, and the probability distributions of the random variables are derived. The correction in grade, based on random guessing of unknown answers, is compared with corrections based on educated guessing. (Author/MH)
Descriptors: Guessing (Tests), Maximum Likelihood Statistics, Multiple Choice Tests, Probability
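The classical correction for guessing that Hamdan analyzes is conventionally written R − W/(k − 1), where R is the number right, W the number wrong, and k the options per item; under purely random guessing the expected contribution of guessed items is zero. A minimal sketch, using that standard formula rather than anything specific to the article:

```python
def corrected_score(rights: int, wrongs: int, k: int) -> float:
    """Classical correction-for-guessing score: R - W/(k-1).

    Omitted items contribute nothing. If an examinee guesses
    blindly on g items, the expected gain g/k is offset by the
    expected penalty (g(k-1)/k) / (k-1) = g/k, so the expected
    corrected score equals the number of items actually known.
    """
    return rights - wrongs / (k - 1)
```

For example, 30 right and 12 wrong on four-option items yields 30 − 12/3 = 26. Hamdan's comparison concerns how this random-guessing correction fares when guessing is instead "educated," i.e. informed by partial knowledge.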
Peer reviewed
Frary, Robert B. – Journal of Educational Measurement, 1989
Responses to a 50-item, 4-choice test were simulated for 1,000 examinees under conventional formula-scoring instructions. Based on 192 simulation runs, formula scores and expected formula scores were determined for each examinee allowing and not allowing for inappropriate omissions. (TJH)
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Multiple Choice Tests
Koplyay, Janos B.; And Others – 1972
The relationship between true ability (operationally defined as the number of items for which the examinee actually knew the correct answer) and the effects of guessing upon observed test variance was investigated. Three basic hypotheses were treated mathematically: there is no functional relationship between true ability and guessing success;…
Descriptors: Guessing (Tests), Predictor Variables, Probability, Scoring
Love, Gayle A. – 1987
In a review of relevant literature, it is argued that correction for guessing formulas should not be used. It is contended that such formulas correct for guessing that does not really exist in a noticeable amount, penalize those students who have low self-esteem and self-confidence, correct for errors that are not necessarily errors, benefit risk…
Descriptors: Guessing (Tests), Scoring Formulas, Self Esteem, Teacher Made Tests
Peer reviewed
Waters, Brian K. – Journal of Educational Research, 1976
This pilot study compared two empirically-derived, option-weighting methods and the resultant effect on the reliability and validity of multiple choice test scores as compared with conventional rights-only scoring. (MM)
Descriptors: Guessing (Tests), Measurement, Multiple Choice Tests, Scoring
Peer reviewed
Tallmadge, G. Kasten – Evaluation Review, 1982
Correction for guessing does not fulfill its intended function when test takers who have nothing to gain from scoring well respond randomly even though they could have answered correctly had they tried. Raw scores underestimate abilities. If random guessing is more prevalent in the control group, correction for guessing inflates treatment effects.…
Descriptors: Guessing (Tests), Research Methodology, Research Problems, Responses