Showing 1 to 15 of 16 results
Peer reviewed
Stewart, Jeffrey; White, David A. – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2011
Multiple-choice tests such as the Vocabulary Levels Test (VLT) are often viewed as a preferable estimator of vocabulary knowledge when compared to yes/no checklists, because self-reporting tests introduce the possibility of students overreporting or underreporting scores. However, multiple-choice tests have their own unique disadvantages. It has…
Descriptors: Guessing (Tests), Scoring Formulas, Multiple Choice Tests, Test Reliability
Peer reviewed
Reid, Frank J. – Journal of Economic Education, 1976
Examines the conventional scoring formula for multiple-choice tests and proposes an alternative scoring formula which takes into account the situation in which the student does not know the right answer but is able to eliminate one or more of the incorrect alternatives. (Author/AV)
Descriptors: Economics Education, Guessing (Tests), Higher Education, Multiple Choice Tests
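The conventional formula Reid examines is the standard correction for guessing, S = R - W/(k - 1) for k-alternative items. Below is a minimal sketch of that conventional formula plus a hypothetical variant in the spirit of his proposal (the exact formula in the article may differ), in which the penalty for a wrong answer reflects how many incorrect alternatives the student had eliminated before guessing:

```python
def conventional_score(rights: int, wrongs: int, k: int) -> float:
    """Classic correction for guessing on k-alternative items: S = R - W/(k-1).

    A blind guesser expects one right answer per (k-1) wrong ones, so the
    expected formula score from pure guessing is zero.
    """
    return rights - wrongs / (k - 1)


def partial_elimination_score(rights: int, eliminated_per_wrong: list[int], k: int) -> float:
    """Hypothetical variant: if a student eliminated e distractors before
    guessing wrong, the guess was among (k - e) alternatives, so a fair
    penalty for that wrong answer is 1/(k - e - 1) rather than 1/(k - 1).
    """
    penalty = sum(1 / (k - e - 1) for e in eliminated_per_wrong)
    return rights - penalty
```

When no distractors are eliminated (e = 0 for every wrong answer), the variant reduces to the conventional formula.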
Kane, Michael T.; Moloney, James M. – 1974
Gilman and Ferry have shown that when the student's score on a multiple choice test is the total number of responses necessary to get all items correct, substantial increases in reliability can occur. In contrast, similar procedures giving partial credit on multiple choice items have resulted in relatively small gains in reliability. The analysis…
Descriptors: Feedback, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
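The procedure Gilman and Ferry studied (often called answer-until-correct scoring) counts every response an examinee makes before getting each item right. A minimal sketch, assuming each item records the number of attempts needed:

```python
def answer_until_correct_score(attempts_per_item: list[int]) -> int:
    """Total number of responses needed to get every item correct.

    The minimum possible total equals the number of items (one attempt
    each); larger totals indicate weaker knowledge.
    """
    return sum(attempts_per_item)
```

Some applications rescale so that higher is better (for example, awarding k - attempts points per k-alternative item); that rescaling is an assumption here, not necessarily the one analyzed in the paper.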
Boldt, Robert F. – 1971
This paper presents the development of scoring functions for use in conjunction with standard multiple-choice items. In addition to the usual indication of the correct alternative, the examinee is to indicate his personal probability of the correctness of his response. Both linear and quadratic polynomial scoring functions are examined for…
Descriptors: Confidence Testing, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
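A standard quadratic scoring function of the kind described is the Brier-type rule, under which an examinee maximizes expected score by reporting true subjective probabilities. The sketch below illustrates that property; the specific polynomials Boldt examined may differ:

```python
def quadratic_score(reported: list[float], correct: int) -> float:
    """Proper quadratic (Brier-type) score: 2*r_c - sum(r_i^2),
    where r_c is the probability reported for the correct alternative."""
    return 2 * reported[correct] - sum(r * r for r in reported)


def expected_score(true_probs: list[float], reported: list[float]) -> float:
    """Expected score when alternative i is correct with probability
    true_probs[i]; maximized when reported == true_probs."""
    return sum(p * quadratic_score(reported, i) for i, p in enumerate(true_probs))
```

For example, with true beliefs (0.7, 0.2, 0.1), honest reporting yields a higher expected score than any distorted report such as (0.5, 0.3, 0.2).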
Boldt, Robert F. – 1974
One formulation of confidence scoring requires the examinee to indicate as a number his personal probability of the correctness of each alternative in a multiple-choice test. For this formulation, a linear transformation of the logarithm of the probability assigned to the correct response is maximized if the examinee accurately reports his personal probability. To equate…
Descriptors: Confidence Testing, Guessing (Tests), Multiple Choice Tests, Probability
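The reward described here is the logarithmic scoring rule: a linear transformation a*ln(r_c) + b of the probability the examinee assigned to the correct alternative. With a > 0 the rule is proper, meaning expected score is maximized by honest reporting. A sketch under that reading of the abstract:

```python
import math


def log_score(reported: list[float], correct: int, a: float = 1.0, b: float = 0.0) -> float:
    """Linear transformation of the log of the probability assigned to
    the correct alternative; a > 0 keeps the rule proper."""
    return a * math.log(reported[correct]) + b


def expected_log_score(true_probs: list[float], reported: list[float]) -> float:
    """Expected score when alternative i is correct with probability
    true_probs[i]; maximized when reported == true_probs."""
    return sum(p * log_score(reported, i) for i, p in enumerate(true_probs))
```

As with the quadratic rule, distorting the report (say, hedging (0.7, 0.2, 0.1) toward (0.5, 0.3, 0.2)) lowers the expected score.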
Peer reviewed
Frary, Robert B.; Hutchinson, T.P. – Educational and Psychological Measurement, 1982
Alternate versions of Hutchinson's theory were compared, and one which implies the existence of partial knowledge was found to be better than one which implies that an appropriate measure of ability is obtained by applying the conventional correction for guessing. (Author/PN)
Descriptors: Guessing (Tests), Latent Trait Theory, Multiple Choice Tests, Scoring Formulas
Lowry, Stephen R. – 1977
The effects of luck and misinformation on the ability of multiple-choice test scores to estimate examinee ability were investigated. Two measures of examinee ability were defined. Misinformation was shown to have little effect on the ability of raw scores, and a substantial effect on the ability of corrected-for-guessing scores, to estimate examinee ability.…
Descriptors: Ability, College Students, Guessing (Tests), Multiple Choice Tests
Cross, Lawrence H.; Frary, Robert B. – 1976
It has been demonstrated that corrected-for-guessing scores will be superior to number-right scores in providing estimates of examinee standing on the trait measured by a multiple-choice test, if it can be assumed that examinees can and will comply with the appropriate directions. The purpose of the present study was to test the validity of that…
Descriptors: Achievement Tests, Guessing (Tests), Individual Characteristics, Multiple Choice Tests
Cross, Lawrence H. – 1975
A novel scoring procedure was investigated in order to obtain scores from a conventional multiple-choice test that would be free of the guessing component or contain a known guessing component even though examinees were permitted to guess at will. Scores computed with the experimental procedure are based not only on the number of items answered…
Descriptors: Algebra, Comparative Analysis, Guessing (Tests), High Schools
Kobrin, Jennifer L.; Kimmel, Ernest W. – College Board, 2006
Based on statistics from the first few administrations of the SAT writing section, the test is performing as expected. The reliability of the writing section is very similar to that of other writing assessments. Based on preliminary validity research, the writing section is expected to add modestly to the prediction of college performance when…
Descriptors: Test Construction, Writing Tests, Cognitive Tests, College Entrance Examinations
Livingston, Samuel A. – 1986
This paper deals with test fairness regarding a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and…
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models
Powell, J. C. – 1979
The educational significance of wrong answers on multiple choice tests was investigated in over 4,000 subjects, aged 7 to 20. Gorham's Proverbs Test--which requires the interpretation of a proverb sentence--was administered and repeated five months later. Four questions were addressed: (1) what can the pattern of answer choice, across age, using…
Descriptors: Age Differences, Cognitive Development, Cognitive Processes, Elementary Secondary Education
Donlon, Thomas F. – 1975
This study empirically determined the optimizing weight to be applied to the Wrongs Total Score in scoring rubrics of the general form S = R - kW, where S is the score, R the rights total, k the weight, and W the wrongs total, if reliability is to be maximized. As is well known, the traditional formula score rests on a theoretical framework which is…
Descriptors: Achievement Tests, Comparative Analysis, Guessing (Tests), Multiple Choice Tests
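Donlon's search for a reliability-maximizing k can be sketched empirically: score both halves of a split test with S = R - kW over a grid of k values and keep the k giving the highest split-half reliability. Everything below (the half-test inputs, the grid, the Spearman-Brown step-up) is an illustrative assumption, not Donlon's actual procedure:

```python
def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5


def split_half_reliability(r1, w1, r2, w2, k: float) -> float:
    """Correlate half-test formula scores S = R - kW, then apply the
    Spearman-Brown step-up for full test length."""
    s1 = [r - k * w for r, w in zip(r1, w1)]
    s2 = [r - k * w for r, w in zip(r2, w2)]
    rho = pearson(s1, s2)
    return 2 * rho / (1 + rho)


def best_k(r1, w1, r2, w2, grid=None) -> float:
    """Grid search for the wrongs weight k that maximizes reliability."""
    grid = grid or [i / 100 for i in range(101)]
    return max(grid, key=lambda k: split_half_reliability(r1, w1, r2, w2, k))
```

With k = 1/(k_alternatives - 1) this reduces to the traditional correction for guessing; the empirical question is whether some other k yields higher reliability.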
Sibley, William L. – 1974
The use of computers in the testing, selection, and placement processes for military services' training programs is reviewed in this paper. Also discussed are the motivational and theoretical foundations of admissible probability testing, the role of the computer in admissible probability testing, and the authors' experience…
Descriptors: Computer Oriented Programs, Computers, Interaction, Military Training
Shuford, Emir H., Jr.; Brown, Thomas A. – 1974
A student's choice of an answer to a test question is a coarse measure of his knowledge about the subject matter of the question. Much finer measurement might be achieved if the student were asked to estimate, for each possible answer, the probability that it is the correct one. Such a procedure could yield two classes of benefits: (a) students…
Descriptors: Bias, Computer Programs, Confidence Testing, Decision Making