Showing all 12 results
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1981
A formal framework is presented for determining which distractors of multiple-choice test items have a small probability of being chosen by a typical examinee. The framework is based on a procedure similar to an indifference zone formulation of a ranking and selection problem. (Author/BW)
Descriptors: Mathematical Models, Multiple Choice Tests, Probability, Test Items
Peer reviewed
Wilcox, Rand R. – Journal of Educational Measurement, 1987
Four procedures are discussed for obtaining a confidence interval when answer-until-correct scoring is used in multiple choice tests. Simulated data show that the choice of procedure depends upon sample size. (GDC)
Descriptors: Computer Simulation, Multiple Choice Tests, Sample Size, Scoring
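The four procedures themselves are not reproduced in the abstract. As a hedged illustration of why the choice of interval procedure can depend on sample size (not Wilcox's actual methods), the sketch below compares the Wald and Wilson score intervals for a proportion, e.g. the proportion of items an examinee answers correctly on the first attempt:

```python
import math

def wald_ci(k, n, z=1.96):
    """Wald interval for a proportion: p_hat +/- z*sqrt(p_hat*(1-p_hat)/n)."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(k, n, z=1.96):
    """Wilson score interval; better small-sample behavior than Wald."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, centre - half), min(1.0, centre + half)

# With a small sample the two procedures disagree noticeably
# (the Wald interval here runs into the boundary at 1.0);
# with a large sample they nearly coincide.
print(wald_ci(8, 10))
print(wilson_ci(8, 10))
print(wald_ci(800, 1000))
print(wilson_ci(800, 1000))
```

With k = 8 of n = 10, the Wald interval is clipped at 1.0 while the Wilson interval stays inside the unit interval; at n = 1000 the two endpoints agree to about two decimal places, which is the sample-size dependence the abstract alludes to.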
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1982
Results in the engineering literature on "k out of n system reliability" can be used to characterize tests based on estimates of the probability of correctly determining whether the examinee knows the correct response. In particular, the minimum number of distractors required for multiple-choice tests can be empirically determined.…
Descriptors: Achievement Tests, Mathematical Models, Multiple Choice Tests, Test Format
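The abstract does not give Wilcox's characterization, but the "k out of n system reliability" quantity it borrows from the engineering literature is standard: the probability that at least k of n independent components function. A minimal sketch, assuming independent items with known per-item probabilities of a correct response:

```python
def k_out_of_n_reliability(probs, k):
    """Probability that at least k of n independent components 'work'
    (here: items answered correctly), via the standard dynamic program
    over the distribution of the number of successes."""
    # dist[j] = P(exactly j successes among the items processed so far)
    dist = [1.0]
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            new[j] += q * (1 - p)   # this item failed
            new[j + 1] += q * p     # this item succeeded
        dist = new
    return sum(dist[k:])

# Example: 5 items, each answered correctly with probability 0.7;
# reliability of an "at least 3 of 5" criterion.
print(round(k_out_of_n_reliability([0.7] * 5, 3), 4))  # -> 0.8369
```

When all probabilities are equal this reduces to a binomial tail probability; the dynamic program also handles unequal per-item probabilities, which is the case relevant to real test items.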
Wilcox, Rand R. – 1979
In the past, several latent structure models have been proposed for handling problems associated with measuring the achievement of examinees. Typically, however, these models describe a specific examinee in terms of an item domain or they describe a few items in terms of a population of examinees. In this paper, a model is proposed which allows a…
Descriptors: Achievement Tests, Guessing (Tests), Mathematical Models, Multiple Choice Tests
Peer reviewed
Wilcox, Rand R. – Psychometrika, 1983
A procedure for determining the reliability of an examinee knowing k out of n possible multiple choice items given his or her performance on those items is presented. Also, a scoring procedure for determining which items an examinee knows is presented. (Author/JKS)
Descriptors: Item Analysis, Latent Trait Theory, Measurement Techniques, Multiple Choice Tests
Peer reviewed
Wilcox, Rand R.; Wilcox, Karen Thompson – Journal of Educational Measurement, 1988
Use of latent class models to examine strategies that examinees (92 college students) use for a specific task is illustrated, via a multiple-choice test of spatial ability. Under an answer-until-correct scoring procedure, models representing an improvement over simplistic random guessing are proposed. (SLD)
Descriptors: College Students, Decision Making, Guessing (Tests), Multiple Choice Tests
Peer reviewed
Wilcox, Rand R. – Journal of Experimental Education, 1983
A latent class model for handling the items in Birenbaum and Tatsuoka's study is described. A method to derive the optimal scoring rule when multiple choice test items are used is illustrated. Remedial training begins after a determination is made as to which of several erroneous algorithms is being used. (Author/DWH)
Descriptors: Achievement Tests, Algorithms, Diagnostic Tests, Latent Trait Theory
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1982
When determining criterion-referenced test length, problems of guessing are shown to be more serious than expected. A new method of scoring is presented that corrects for guessing without assuming that guessing is random. Empirical investigations of the procedure are examined. Test length can be substantially reduced. (Author/CM)
Descriptors: Criterion Referenced Tests, Guessing (Tests), Multiple Choice Tests, Scoring
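The paper's correction, which does not assume random guessing, is not reproduced in the abstract. For contrast, the conventional formula score it improves on, which does assume purely random guessing, is:

```python
def formula_score(right, wrong, options):
    """Conventional correction for random guessing on multiple-choice
    items: right - wrong/(options - 1), chosen so a pure random guesser
    has an expected score of zero. The abstract's point is that this
    assumes guessing is random; Wilcox's procedure does not."""
    return right - wrong / (options - 1)

# 40-item test, 4 options per item: 28 right, 8 wrong, 4 omitted.
print(formula_score(28, 8, 4))
```

Under random guessing with 4 options, a guesser gets about one item in four right, so each wrong answer offsets a third of a right one; omitted items are simply not penalized.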
Peer reviewed
Wilcox, Rand R.; And Others – Journal of Educational Measurement, 1988
The second-response conditional probability model of the decision-making strategies used by examinees answering multiple-choice test items was revised. Increasing the number of distractors, or providing distractors that give examinees (N=106) the option to follow the model, improved results and gave a good fit to the data for 29 of 30 items. (SLD)
Descriptors: Cognitive Tests, Decision Making, Mathematical Models, Multiple Choice Tests
Wilcox, Rand R. – 1981
These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…
Descriptors: Achievement Tests, Criterion Referenced Tests, Guessing (Tests), Mathematical Models
Wilcox, Rand R. – 1982
This document contains three papers from the Methodology Project of the Center for the Study of Evaluation. Methods for characterizing test accuracy are reported in the first two papers. "Bounds on the K Out of N Reliability of a Test, and an Exact Test for Hierarchically Related Items" describes and illustrates how an extension of a…
Descriptors: Educational Testing, Evaluation Methods, Guessing (Tests), Latent Trait Theory
Peer reviewed
Wilcox, Rand R. – Journal of Experimental Education, 1982
A closed sequential procedure for estimating true score is proposed for use with answer-until-correct tests. The accuracy of determining true score is the same as in conventional sequential solutions, but the possibility of using an unnecessarily large number of items is eliminated. (Author/CM)
Descriptors: Answer Sheets, Guessing (Tests), Item Banks, Measurement Techniques