Showing all 11 results
Peer reviewed
Van Hecke, Tanja – Teaching Mathematics and Its Applications, 2015
Optimal assessment tools should measure students' knowledge correctly and without bias within a limited time. Multiple-choice scoring is one method for automating the scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability to pass: the number right scoring, the initial correction (IC) and…
Descriptors: Multiple Choice Tests, Error Correction, Grading, Evaluation Methods
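The probabilistic comparison this abstract describes can be sketched under simple assumptions: with number-right scoring, the chance that an examinee reaches a pass mark is binomial in the per-item success probability. The function below is our own illustration; the parameter names and the knowledge-or-guess success model are assumptions, not taken from the article.

```python
from math import comb

def pass_probability(n_items, n_choices, pass_mark, p_know=0.0):
    """Probability of reaching pass_mark on an n_items multiple-choice test
    under number-right scoring. Each item is answered correctly with
    probability p = p_know + (1 - p_know)/n_choices (known, or a lucky
    guess among n_choices options). Illustrative sketch only."""
    p = p_know + (1.0 - p_know) / n_choices
    return sum(comb(n_items, k) * p**k * (1 - p)**(n_items - k)
               for k in range(pass_mark, n_items + 1))

# A pure guesser on 20 four-option items who needs 10 correct to pass:
print(pass_probability(20, 4, 10))
```

Even this toy model makes the article's concern concrete: under pure number-right scoring a guesser retains a small but nonzero chance of passing, which corrections such as the IC are designed to address.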
Peer reviewed
Stewart, Jeffrey; White, David A. – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2011
Multiple-choice tests such as the Vocabulary Levels Test (VLT) are often viewed as a preferable estimator of vocabulary knowledge when compared to yes/no checklists, because self-reporting tests introduce the possibility of students overreporting or underreporting scores. However, multiple-choice tests have their own unique disadvantages. It has…
Descriptors: Guessing (Tests), Scoring Formulas, Multiple Choice Tests, Test Reliability
Peer reviewed
Hamdan, M. A. – Journal of Experimental Education, 1979
The distribution theory underlying corrections for guessing is analyzed, and the probability distributions of the random variables are derived. The correction in grade, based on random guessing of unknown answers, is compared with corrections based on educated guessing. (Author/MH)
Descriptors: Guessing (Tests), Maximum Likelihood Statistics, Multiple Choice Tests, Probability
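The classical correction for guessing that analyses like this one build on subtracts a fraction of the wrong answers, so that purely random guesses contribute nothing in expectation. A minimal sketch of that baseline rule follows; the refinements Hamdan derives for educated (non-random) guessing are not reproduced here.

```python
def formula_score(n_right, n_wrong, n_choices):
    """Classical correction for guessing: penalize each wrong answer by
    1/(k-1), where k is the number of choices, so random guesses have an
    expected contribution of zero. Omitted items are neither rewarded
    nor penalized."""
    return n_right - n_wrong / (n_choices - 1)

# 60 items, 4 options: a student knows 40 and guesses the other 20 at
# random. On average 5 guesses land right and 15 wrong, so the expected
# corrected score recovers the number of items truly known:
print(formula_score(40 + 5, 15, 4))   # 45 - 15/3 = 40.0
```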
Peer reviewed
Hansen, Richard – Journal of Educational Measurement, 1971
The relationship between certain personality variables and the degree to which examinees display certainty in their responses was investigated. (Author)
Descriptors: Guessing (Tests), Individual Characteristics, Multiple Choice Tests, Personality Assessment
Boldt, Robert F. – 1974
One formulation of confidence scoring requires the examinee to indicate as a number his personal probability of the correctness of each alternative in a multiple-choice test. For this formulation a linear transformation of the logarithm of the correct response is maximized if the examinee accurately reports his personal probability. To equate…
Descriptors: Confidence Testing, Guessing (Tests), Multiple Choice Tests, Probability
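The scoring rule this abstract describes is logarithmic: the examinee's reward is a linear transformation of the log of the probability reported for the correct alternative, and expected score is maximized by honest reporting. The sketch below demonstrates that property numerically; the constants and variable names are ours, and the paper's exact transformation is not reproduced.

```python
from math import log

def expected_log_score(true_p, reported_p, a=0.0, b=1.0):
    """Expected value of the score a + b*log(r_correct) when the examinee's
    true belief over the alternatives is true_p but the report is
    reported_p. With b > 0 this is a 'proper' rule: the expectation is
    maximized when reported_p equals true_p."""
    return sum(q * (a + b * log(r)) for q, r in zip(true_p, reported_p))

truth = [0.7, 0.2, 0.1]
honest = expected_log_score(truth, truth)
hedged = expected_log_score(truth, [0.5, 0.3, 0.2])  # misreported beliefs
print(honest > hedged)   # True: truthful reporting earns the higher expected score
```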
Suhadolnik, Debra; Weiss, David J. – 1983
The present study was an attempt to alleviate some of the difficulties inherent in multiple-choice items by having examinees respond to multiple-choice items in a probabilistic manner. Using this format, examinees are able to respond to each alternative and to provide indications of any partial knowledge they may possess concerning the item. The…
Descriptors: Confidence Testing, Multiple Choice Tests, Probability, Response Style (Tests)
Boldt, Robert F. – 1971
One formulation of confidence scoring requires the examinee to indicate as a number his personal probability of the correctness of each alternative in a multiple-choice test. For this formulation, a linear transformation of the logarithm of the correct response is maximized if the examinee reports accurately his personal probability. To equate…
Descriptors: Confidence Testing, Guessing (Tests), Multiple Choice Tests, Probability
Kane, Michael T.; Moloney, James M. – 1976
The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
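Under the Answer-Until-Correct procedure an examinee keeps responding to an item until the right option is found, and the score depends on how many attempts that took. One simple linear rule is sketched below as an illustration; it is our assumption, not the specific scoring function whose reliability the paper analyzes.

```python
def auc_item_score(attempts, n_choices):
    """One common Answer-Until-Correct scoring rule: award
    n_choices - attempts points, so a first-try answer earns full credit
    and exhausting every option earns zero. Illustrative only; the paper
    treats reliability as a function of the scoring weights chosen."""
    if not 1 <= attempts <= n_choices:
        raise ValueError("attempts must be between 1 and n_choices")
    return n_choices - attempts

# 4-option item: first try = 3 points, second = 2, ..., fourth = 0.
print([auc_item_score(a, 4) for a in range(1, 5)])   # [3, 2, 1, 0]
```

Because wrong first attempts still yield information (and partial credit), each item behaves like several graded observations, which is the intuition behind the reliability gain the abstract mentions.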
Sibley, William L. – 1974
This paper reviews the use of computers in the testing, selection, and placement processes for military services' training programs. Also discussed are the motivational and theoretical foundations of admissible probability testing, the role of the computer in admissible probability testing, and the authors' experience…
Descriptors: Computer Oriented Programs, Computers, Interaction, Military Training
Shuford, Emir H., Jr.; Brown, Thomas A. – 1974
A student's choice of an answer to a test question is a coarse measure of his knowledge about the subject matter of the question. Much finer measurement might be achieved if the student were asked to estimate, for each possible answer, the probability that it is the correct one. Such a procedure could yield two classes of benefits: (a) students…
Descriptors: Bias, Computer Programs, Confidence Testing, Decision Making
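When examinees assign a probability to each possible answer, as this report proposes, the responses must be scored with a rule that rewards honest estimates. A quadratic (Brier-type) rule is one standard choice and is sketched below purely as an illustration; it is not necessarily the rule Shuford and Brown use.

```python
def brier_item_score(reported_p, correct_index):
    """Quadratic (Brier-type) score for probabilities assigned to each
    alternative: 2*r_correct - sum(r_i^2). Like the logarithmic rule it
    is 'proper': expected score is maximized by reporting true beliefs.
    Illustrative sketch, not the report's specific rule."""
    return 2 * reported_p[correct_index] - sum(r * r for r in reported_p)

# Full confidence in the right answer earns 1; spreading probability
# uniformly over 4 options earns 0.25.
print(brier_item_score([1.0, 0.0, 0.0, 0.0], 0))      # 1.0
print(brier_item_score([0.25, 0.25, 0.25, 0.25], 0))  # 0.25
```

The gap between those two scores is exactly the "finer measurement" the abstract alludes to: partial knowledge that a plain choice format discards shows up directly in the score.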
Warm, Thomas A. – 1978
This primer is an introduction to item response theory (also called item characteristic curve theory, or latent trait theory) as it is used most commonly--for scoring multiple choice achievement or aptitude tests. Written for the testing practitioner with minimum training in statistics and psychometrics, it presents and illustrates the basic…
Descriptors: Ability Identification, Achievement Tests, Adaptive Testing, Aptitude Tests
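The item characteristic curves at the heart of the theory this primer introduces relate an examinee's latent ability to the probability of a correct response. The three-parameter logistic model is the standard form for multiple-choice items because its lower asymptote accommodates guessing; the sketch below states that standard formula, though the primer's own presentation may differ.

```python
from math import exp

def three_pl(theta, a, b, c):
    """Three-parameter logistic item characteristic curve:
    P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    a: discrimination, b: difficulty, c: pseudo-guessing lower asymptote."""
    return c + (1.0 - c) / (1.0 + exp(-a * (theta - b)))

# A 4-option item where even very low-ability examinees succeed about
# 25% of the time by guessing:
for theta in (-3, 0, 3):
    print(round(three_pl(theta, a=1.2, b=0.0, c=0.25), 3))
```

At theta = b the curve sits halfway between the guessing floor c and 1, which is why b is read as the item's difficulty.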