Showing 1 to 15 of 21 results
Peer reviewed
Wilcox, Rand R. – Journal of Educational Measurement, 1982
A new model for measuring misinformation is suggested. A modification of Wilcox's strong true-score model, to be used in certain situations, is indicated, since it solves the problem of correcting for guessing without assuming guessing is random. (Author/GK)
Descriptors: Achievement Tests, Guessing (Tests), Mathematical Models, Scoring Formulas
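The conventional correction-for-guessing formula assumes that every wrong answer reflects a purely random guess, which is exactly the assumption the model above is meant to avoid. For context only, a minimal sketch of that conventional formula score (variable names are illustrative, not from the paper):

```python
def formula_score(num_right, num_wrong, num_choices):
    """Classical correction-for-guessing formula score.

    Assumes each wrong answer is a random guess among `num_choices`
    equally likely alternatives -- the assumption the strong
    true-score approach above does not require.
    """
    return num_right - num_wrong / (num_choices - 1)

# Example: 30 right, 10 wrong on 4-option items -> 30 - 10/3 = 26.67
print(formula_score(30, 10, 4))
```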
Marco, Gary L. – 1975
A method of interpolation has been derived that should be superior to linear interpolation in computing the percentile ranks of test scores for unimodal score distributions. The superiority of the logistic interpolation over the linear interpolation is most noticeable for distributions consisting of only a small number of score intervals (say…
Descriptors: Comparative Analysis, Intervals, Mathematical Models, Percentage
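Marco's logistic interpolation formula is not reproduced in the abstract; the sketch below shows only the standard linear interpolation of percentile ranks within a score interval, together with a generic logistic CDF as a smoother stand-in (both illustrative, not the paper's derivation):

```python
import math

def percentile_rank_linear(x, lower, width, cum_below, freq_in, n_total):
    """Percentile rank by linear interpolation within the score
    interval [lower, lower + width) that contains x."""
    frac = (x - lower) / width
    return 100.0 * (cum_below + frac * freq_in) / n_total

def percentile_rank_logistic(x, mean, scale):
    """Percentile rank from a fitted logistic CDF -- a generic smooth
    alternative; Marco's actual formula may differ."""
    return 100.0 / (1.0 + math.exp(-(x - mean) / scale))

# Example: score 23 in the interval [20, 25), with 400 of 1000
# examinees below 20 and 150 inside the interval.
print(percentile_rank_linear(23, 20, 5, 400, 150, 1000))  # 49.0
print(percentile_rank_logistic(23, 22.0, 4.0))            # ~56.2
```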
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1980
Technical problems in achievement testing associated with using latent structure models to estimate the probability of examinees guessing correct responses are studied, as is the absence of such problems when Wilcox's formula score is used. Maximum likelihood estimates are derived which may be applied when items are hierarchically related.…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Maximum Likelihood Statistics
Wilcox, Rand R. – 1979
In the past, several latent structure models have been proposed for handling problems associated with measuring the achievement of examinees. Typically, however, these models describe a specific examinee in terms of an item domain or they describe a few items in terms of a population of examinees. In this paper, a model is proposed which allows a…
Descriptors: Achievement Tests, Guessing (Tests), Mathematical Models, Multiple Choice Tests
Peer reviewed
Garcia-Perez, Miguel A.; Frary, Robert B. – Applied Psychological Measurement, 1989
Simulation techniques were used to generate conventional test responses and track the proportion of alternatives examinees could classify independently before and after taking the test. Finite-state scores were compared with these actual values and with number-correct and formula scores. Finite-state scores proved useful. (TJH)
Descriptors: Comparative Analysis, Computer Simulation, Guessing (Tests), Mathematical Models
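The abstract does not give the finite-state scoring rules, so the sketch below only mimics the simulation setup in broad strokes: generate responses from an examinee who either knows an item or guesses at random, then compare the number-correct and formula scores with the true number known (all parameter values hypothetical):

```python
import random

def simulate_examinee(n_items=40, p_known=0.6, n_choices=4, rng=random):
    """Simulate one examinee: each item is either known (answered
    correctly) or guessed at random among n_choices options."""
    known = sum(rng.random() < p_known for _ in range(n_items))
    guessed_right = sum(rng.random() < 1.0 / n_choices
                        for _ in range(n_items - known))
    right = known + guessed_right
    wrong = n_items - right
    number_correct = right
    formula = right - wrong / (n_choices - 1)
    return known, number_correct, formula

random.seed(1)
print(simulate_examinee())  # true number known vs. the two observed scores
```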
Wilcox, Rand R. – 1978
A mastery test is frequently described as follows: an examinee responds to n dichotomously scored test items. If the examinee's observed (number correct) score is high enough, a mastery decision is made and the examinee is advanced to the next level of instruction; otherwise, a nonmastery decision is made and the examinee is given remedial work. This…
Descriptors: Comparative Analysis, Cutting Scores, Factor Analysis, Mastery Tests
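A minimal sketch of the decision rule described above, with a hypothetical cutting score:

```python
def mastery_decision(observed_score, cutting_score):
    """Advance the examinee if the number-correct score reaches the
    cutting score; otherwise assign remedial work."""
    return "advance" if observed_score >= cutting_score else "remediate"

# Example: 18 of n = 25 items correct against a cutting score of 20.
print(mastery_decision(18, 20))  # remediate
```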
Peer reviewed
McGaw, Barry; Glass, Gene V. – American Educational Research Journal, 1980
There are difficulties in expressing effect sizes on a common metric when some studies use transformed scales to express group differences, or use factorial designs or covariance adjustments to obtain a reduced error term. A common metric on which effect sizes may be standardized is described. (Author/RL)
Descriptors: Control Groups, Error of Measurement, Mathematical Models, Research Problems
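The common metric the authors discuss is a standardized mean difference; as a minimal sketch, the version below standardizes on the control-group standard deviation (Glass's approach), with hypothetical summary statistics. The paper's adjustments for transformed scales and reduced error terms go beyond this.

```python
def standardized_effect_size(mean_treatment, mean_control, sd_control):
    """Standardized mean difference using the control-group SD, so
    studies reported on different scales share a common metric."""
    return (mean_treatment - mean_control) / sd_control

# Example: treatment mean 54, control mean 50, control SD 10 -> 0.4
print(standardized_effect_size(54, 50, 10))
```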
Peer reviewed
Penfield, Douglas A.; Koffler, Stephen L. – Journal of Experimental Education, 1978
Three nonparametric alternatives to the parametric Bartlett test are presented for handling the K-sample equality of variance problem. The two-sample Siegel-Tukey test, Mood test, and Klotz test are extended to the multisample situation by Puri's methods. These K-sample scale tests are illustrated and compared. (Author/GDC)
Descriptors: Comparative Analysis, Guessing (Tests), Higher Education, Mathematical Models
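As a point of reference for the two-sample case only, a sketch of the Mood scale statistic computed from combined-sample ranks, with the standard normal approximation under the null hypothesis; the K-sample extension via Puri's method is not shown:

```python
def mood_scale_test(x, y):
    """Two-sample Mood test for a difference in scale.

    M is the sum over the x-sample of (rank - (N+1)/2)^2, with ranks
    taken in the combined sample; large deviations from the mid-rank
    indicate greater dispersion in x relative to y.
    """
    combined = sorted(x + y)
    n, m = len(x), len(y)
    N = n + m
    # 1-based ranks of the x observations in the combined sample
    # (ties are ignored for simplicity in this sketch)
    ranks = [combined.index(v) + 1 for v in x]
    M = sum((r - (N + 1) / 2) ** 2 for r in ranks)
    mean = n * (N ** 2 - 1) / 12
    var = n * m * (N + 1) * (N ** 2 - 4) / 180
    z = (M - mean) / var ** 0.5
    return M, z

x = [12.1, 9.8, 15.3, 7.2, 18.4]   # visibly more dispersed sample
y = [11.0, 11.5, 12.0, 10.8, 11.9]
print(mood_scale_test(x, y))
```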
Hutchinson, T. P. – 1984
One means of learning about the processes operating in a multiple choice test is to include some test items, called nonsense items, which have no correct answer. This paper compares two versions of a mathematical model of test performance to interpret test data that includes both genuine and nonsense items. One formula is based on the usual…
Descriptors: Foreign Countries, Guessing (Tests), Mathematical Models, Multiple Choice Tests
Divgi, D. R. – 1980
A method is proposed for providing an absolute, in contrast to comparative, evaluation of how well two tests are equated by transforming their raw scores into a particular common scale. The method is direct, not requiring creation of a standard for comparison; it expresses its results in scaled rather than raw scores; and it allows examination of the…
Descriptors: Equated Scores, Evaluation Criteria, Item Analysis, Latent Trait Theory
Cluxton, Sue Ellen; Mandeville, Garrett K. – 1979
A comparison was made between four different scoring procedures for the 45-item Reading Comprehension subtest, Level I, of the Comprehensive Test of Basic Skills, Form S, for a sample of 1,000 third grade students. These students were selected from among those who omitted from 3 to 22 of the 45 items. Another representative sample of 1,300…
Descriptors: Achievement Tests, Guessing (Tests), Item Sampling, Latent Trait Theory
Veitch, William R. – 1979
The one-parameter latent trait theory of Georg Rasch has two assumptions: that student abilities can be measured on an equal interval scale, and that the success of a student with a given item is a function of student achievement and item difficulty. The grade four Michigan Educational Assessment Program reading test was designed to measure…
Descriptors: Cutting Scores, Educational Assessment, Intermediate Grades, Item Analysis
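A minimal sketch of the one-parameter (Rasch) item response function underlying the assumptions above: the probability of success depends only on the difference between student ability and item difficulty, both on a common logit scale (parameter values below are illustrative):

```python
import math

def rasch_probability(ability, difficulty):
    """Rasch model: P(correct) = exp(ability - difficulty)
    / (1 + exp(ability - difficulty))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A student 1 logit above an item's difficulty succeeds ~73% of the time.
print(rasch_probability(0.5, -0.5))  # ~0.731
```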
Wilcox, Rand R. – 1981
These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…
Descriptors: Achievement Tests, Criterion Referenced Tests, Guessing (Tests), Mathematical Models
Kane, Michael T.; Moloney, James M. – 1976
The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
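The abstract does not spell out the AUC scoring rule; a common convention, shown here purely as an illustration and not necessarily the rule Kane and Moloney analyze, awards fewer points the more attempts an examinee needs before reaching the correct answer:

```python
def auc_item_score(attempts_needed, n_choices):
    """Illustrative Answer-Until-Correct item score: full credit for a
    first-attempt success, decreasing linearly to zero when every
    alternative had to be tried."""
    return (n_choices - attempts_needed) / (n_choices - 1)

# Four-option item: 1st try -> 1.0, 2nd -> 0.67, 3rd -> 0.33, 4th -> 0.0
for attempts in range(1, 5):
    print(attempts, auc_item_score(attempts, 4))
```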
Livingston, Samuel A. – 1986
This paper deals with test fairness regarding a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and…
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models