Lord, Frederic M. – Educational and Psychological Measurement, 1973
A group of 21 students was tested under a time limit considerably shorter than should have been allowed. This report describes a tryout of a method for estimating the "power" scores that would have been obtained if the students had had enough time to finish. (Author/CB)
Descriptors: Mathematical Models, Scoring Formulas, Statistical Analysis, Theories
Lord, Frederic M. – 1973
Omitted items cannot properly be treated as wrong when estimating ability and item parameters. A convenient method for utilizing the information provided by omissions is presented. Some theoretical and considerable empirical justification is adduced for the estimates obtained by both old and new methods. (Author)
Descriptors: Mathematical Models, Probability, Psychometrics, Research Reports
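
The method is often summarized as giving an omitted item fractional credit equal to the chance score 1/k instead of scoring it wrong, so that omissions enter the likelihood at the level a blind guess would have earned. A minimal sketch under that reading, with an illustrative 3PL response function (all names here are hypothetical):

```python
import numpy as np

def p_correct(theta, a, b, c):
    """Three-parameter logistic probability of a correct response."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def log_likelihood(theta, items, responses, n_options=5):
    """Log-likelihood over items; responses are 1 (right), 0 (wrong), or
    None (omitted). Omits contribute fractional credit 1/k rather than
    being treated as wrong, which is the point the abstract makes."""
    ll = 0.0
    for (a, b, c), r in zip(items, responses):
        p = p_correct(theta, a, b, c)
        u = 1.0 / n_options if r is None else float(r)  # fractional credit for omits
        ll += u * np.log(p) + (1.0 - u) * np.log(1.0 - p)
    return ll
```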

Wilcox, Rand R. – Journal of Educational Measurement, 1982
A new model for measuring misinformation is suggested. A modification of Wilcox's strong true-score model is indicated for certain situations, since it corrects for guessing without assuming that guessing is random. (Author/GK)
Descriptors: Achievement Tests, Guessing (Tests), Mathematical Models, Scoring Formulas
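
To see why the distinction matters, contrast the classical random-guessing correction with a model that admits misinformation. A hypothetical three-state sketch (not Wilcox's actual parameterization):

```python
def p_correct_random_guessing(zeta, k):
    """Classical correction-for-guessing model: the examinee knows the
    answer with probability zeta and otherwise guesses blindly among
    the k options."""
    return zeta + (1.0 - zeta) / k

def p_correct_with_misinformation(zeta, mu, k):
    """Hypothetical three-state sketch: know (zeta), misinformed (mu,
    committed to a particular wrong option), else blind guessing."""
    return zeta + (1.0 - zeta - mu) / k
```

Substituting the misinformation version into the classical corrected score (kp - 1)/(k - 1) yields zeta - mu/(k - 1): whenever misinformation is present (mu > 0), a random-guessing correction understates what the examinee knows.
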
Marco, Gary L. – 1975
A method of interpolation has been derived that should be superior to linear interpolation in computing the percentile ranks of test scores for unimodal score distributions. The superiority of the logistic interpolation over the linear interpolation is most noticeable for distributions consisting of only a small number of score intervals (say…
Descriptors: Comparative Analysis, Intervals, Mathematical Models, Percentage
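
The idea can be sketched by interpolating on the logit of the cumulative distribution rather than on the raw proportions; this illustrates logistic interpolation in general, not Marco's exact derivation:

```python
import numpy as np

def pr_linear(x, lower, upper, p_below, p_within):
    """Percentile rank by linear interpolation inside the interval
    [lower, upper): p_below is the proportion scoring below the interval,
    p_within the proportion inside it; density is assumed uniform."""
    frac = (x - lower) / (upper - lower)
    return 100.0 * (p_below + frac * p_within)

def pr_logistic(x, lower, upper, p_below, p_within, eps=1e-9):
    """Same quantities, but the cumulative distribution is assumed to
    follow a logistic ogive through the interval endpoints, so the
    interpolated curve bends with the distribution instead of running
    straight through it."""
    # logits of the cumulative proportions at the interval endpoints
    lo = np.clip(p_below, eps, 1.0 - eps)
    hi = np.clip(p_below + p_within, eps, 1.0 - eps)
    z_lo, z_hi = np.log(lo / (1 - lo)), np.log(hi / (1 - hi))
    # interpolate linearly on the logit scale, then invert
    z = z_lo + (x - lower) / (upper - lower) * (z_hi - z_lo)
    return 100.0 / (1.0 + np.exp(-z))
```

With only a few wide score intervals the straight-line assumption inside each interval is at its worst, which is where the abstract locates the advantage of the logistic form.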

Wilcox, Rand R. – Educational and Psychological Measurement, 1980
Technical problems in achievement testing associated with using latent structure models to estimate the probability of examinees guessing correct responses are studied, as is the absence of such problems when Wilcox's formula score is used. Maximum likelihood estimates are derived which may be applied when items are hierarchically related.…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Maximum Likelihood Statistics
Wilcox, Rand R. – 1979
In the past, several latent structure models have been proposed for handling problems associated with measuring the achievement of examinees. Typically, however, these models describe a specific examinee in terms of an item domain or they describe a few items in terms of a population of examinees. In this paper, a model is proposed which allows a…
Descriptors: Achievement Tests, Guessing (Tests), Mathematical Models, Multiple Choice Tests

Drasgow, Fritz; And Others – Applied Psychological Measurement, 1989
Multilinear formula scoring (MFS) is reviewed, with emphasis on estimating option characteristic curves (OCCs). MFS was used to estimate OCCs for the arithmetic reasoning subtest of the Armed Services Vocational Aptitude Battery for 2,978 examinees. A second analysis obtained OCCs for simulated data. The use of MFS is discussed. (SLD)
Descriptors: Estimation (Mathematics), Mathematical Models, Multiple Choice Tests, Scores
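
An option characteristic curve gives the probability of choosing each response option as a function of proficiency. MFS estimates smooth parametric OCCs; the descriptive analogue below, which simply bins examinees by total score, is only a hypothetical illustration of the quantity being estimated:

```python
import numpy as np

def empirical_occ(total_scores, option_choices, n_options, n_bins=10):
    """Crude empirical option characteristic curves for one item: bin
    examinees by total score and tabulate the proportion choosing each
    option within each bin."""
    total_scores = np.asarray(total_scores)
    option_choices = np.asarray(option_choices)
    edges = np.quantile(total_scores, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, total_scores, side="right") - 1,
                   0, n_bins - 1)
    occ = np.zeros((n_bins, n_options))
    for b in range(n_bins):
        in_bin = option_choices[bins == b]
        for opt in range(n_options):
            occ[b, opt] = np.mean(in_bin == opt) if in_bin.size else np.nan
    return occ
```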

Garcia-Perez, Miguel A.; Frary, Robert B. – Applied Psychological Measurement, 1989
Simulation techniques were used to generate conventional test responses and track the proportion of alternatives examinees could classify independently before and after taking the test. Finite-state scores were compared with these actual values and with number-correct and formula scores. Finite-state scores proved useful. (TJH)
Descriptors: Comparative Analysis, Computer Simulation, Guessing (Tests), Mathematical Models
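
A generator in the spirit of that simulation might let an examinee who does not know the key eliminate each distractor independently and guess among the survivors; the probabilities and names here are assumptions, not the authors' design:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_item_response(n_options, p_eliminate, knows, rng=rng):
    """One multiple-choice response: the examinee either knows the key or
    eliminates each distractor independently with probability p_eliminate
    and guesses uniformly among the surviving options. Returns whether
    the response was correct and how many distractors were classified."""
    if knows:
        return True, n_options - 1  # correct; all distractors classified
    eliminated = rng.random(n_options - 1) < p_eliminate
    remaining = 1 + np.count_nonzero(~eliminated)  # key + surviving distractors
    correct = rng.random() < 1.0 / remaining
    return bool(correct), int(np.count_nonzero(eliminated))
```
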
Hester, Yvette – 1993
Some of the different approaches to standard setting are discussed. Brief comments and references are offered concerning strategies that rely primarily on the use of expert judgment. Controversy surrounds methods that use expert judges, as well as those using test groups to set scores empirically. A minimax procedure developed by H. Huynh, an…
Descriptors: Academic Standards, Classification, Cutting Scores, Evaluation Methods

Wilcox, Rand R.; Harris, Chester W. – Journal of Educational Measurement, 1977
Emrick's proposed method for determining a mastery level cut-off score is questioned. Emrick's method is shown to be useful only in limited situations. (JKS)
Descriptors: Correlation, Cutting Scores, Mastery Tests, Mathematical Models
Wilcox, Rand R. – 1978
A mastery test is frequently described as follows: an examinee responds to n dichotomously scored test items. Depending upon the examinee's observed (number correct) score, either a mastery decision is made and the examinee is advanced to the next level of instruction, or a nonmastery decision is made and the examinee is given remedial work. This…
Descriptors: Comparative Analysis, Cutting Scores, Factor Analysis, Mastery Tests
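
The decision rule itself is a threshold on the number-correct score; pairing it with a binomial true-score model shows how misclassification rates at the cutting score can be checked. A minimal sketch (the binomial assumption is illustrative):

```python
from math import comb

def mastery_decision(number_correct, cutoff):
    """The decision rule the abstract describes: advance at or above the
    cutting score, remediate below it."""
    return "mastery" if number_correct >= cutoff else "nonmastery"

def p_pass(true_p, n_items, cutoff):
    """Probability of a mastery decision for an examinee whose chance of
    answering any single item correctly is true_p, under a binomial
    model; useful for checking false-positive and false-negative rates
    around the cutoff."""
    return sum(comb(n_items, x) * true_p**x * (1 - true_p)**(n_items - x)
               for x in range(cutoff, n_items + 1))
```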

McGaw, Barry; Glass, Gene V. – American Educational Research Journal, 1980
There are difficulties in expressing effect sizes on a common metric when some studies use transformed scales to express group differences, or use factorial designs or covariance adjustments to obtain a reduced error term. A common metric on which effect sizes may be standardized is described. (Author/RL)
Descriptors: Control Groups, Error of Measurement, Mathematical Models, Research Problems
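
One standard way to put such studies on a common metric is Glass's delta, with any covariance-adjusted error term scaled back up to the full-score standard deviation; the conversion below follows the usual residual-variance identity and is a sketch, not the paper's full treatment:

```python
from math import sqrt

def effect_size(mean_e, mean_c, sd_control):
    """Glass's delta: group mean difference in control-group SD units."""
    return (mean_e - mean_c) / sd_control

def sd_unadjusted(sd_residual, r_covariate):
    """Recover the full-scale SD from a covariance-adjusted error term:
    residual variance is (1 - r^2) times the unadjusted variance, so
    dividing by sqrt(1 - r^2) puts the effect back on the common metric."""
    return sd_residual / sqrt(1.0 - r_covariate**2)
```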

Baskin, David – Journal of Educational Measurement, 1975
Traditional test scoring does not allow the examination of differences among subjects obtaining identical raw scores on the same test. A configuration scoring paradigm for identical raw scores, which provides for such comparisons, is developed and illustrated. (Author)
Descriptors: Elementary Secondary Education, Individual Differences, Mathematical Models, Multiple Choice Tests
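
For instance, two examinees can earn the same raw score on entirely different items. A simple way to quantify such configuration differences is the Hamming distance between 0/1 response patterns (an illustrative device, not Baskin's scoring paradigm itself):

```python
import numpy as np

def pattern_distances(responses):
    """Pairwise Hamming distances between the 0/1 response patterns of
    examinees with identical raw scores -- the kind of comparison
    configuration scoring makes possible and number-correct scoring hides."""
    responses = np.asarray(responses)
    n = len(responses)
    d = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = int(np.sum(responses[i] != responses[j]))
    return d
```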

Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multiple-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
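
Under AUC administration, the record for each item is the number of attempts needed, which supports graded scoring rules; the linear rule below is one common choice, shown only for contrast with zero-one scoring:

```python
def auc_item_score(attempts, n_options):
    """One common answer-until-correct scoring rule: full credit for a
    first-try success, declining linearly to zero when every option had
    to be tried. The specific rule is illustrative; the paper's analysis
    concerns reliability under AUC versus zero-one scoring."""
    return (n_options - attempts) / (n_options - 1)

def zero_one_score(attempts):
    """Conventional scoring: credit only if the first attempt is correct."""
    return 1 if attempts == 1 else 0
```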

Penfield, Douglas A.; Koffler, Stephen L. – Journal of Experimental Education, 1978
Three nonparametric alternatives to the parametric Bartlett test are presented for handling the K-sample equality of variance problem. The two-sample Siegel-Tukey test, Mood test, and Klotz test are extended to the multisample situation by Puri's methods. These K-sample scale tests are illustrated and compared. (Author/GDC)
Descriptors: Comparative Analysis, Guessing (Tests), Higher Education, Mathematical Models
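
The Mood test, for example, replaces each observation by the squared deviation of its rank from the mid-rank, and the Puri-type extension refers a normalized between-group sum of squares of those scores to a chi-square distribution. A sketch of that construction (assuming the standard asymptotic form for K-sample rank-score statistics):

```python
import numpy as np
from scipy import stats

def mood_k_sample(samples):
    """K-sample scale test with Mood scores a(R) = (R - (N+1)/2)^2: the
    statistic sum n_j * (mean score in group j - grand mean)^2 / s^2 is
    referred to a chi-square distribution with k - 1 degrees of freedom,
    the same normalization that reduces to Kruskal-Wallis for rank scores."""
    data = np.concatenate(samples)
    n_total = len(data)
    ranks = stats.rankdata(data)                   # midranks for ties
    scores = (ranks - (n_total + 1) / 2.0) ** 2    # Mood scale scores
    grand_mean = scores.mean()
    s2 = scores.var(ddof=1)
    t = 0.0
    start = 0
    for sample in samples:
        n_j = len(sample)
        t += n_j * (scores[start:start + n_j].mean() - grand_mean) ** 2
        start += n_j
    t /= s2
    p = stats.chi2.sf(t, df=len(samples) - 1)
    return t, p
```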