Descriptor
Scoring Formulas | 12 |
Multiple Choice Tests | 8 |
Guessing (Tests) | 6 |
Higher Education | 5 |
Tables (Data) | 5 |
Test Reliability | 3 |
Comparative Analysis | 2 |
Difficulty Level | 2 |
Educational Research | 2 |
Sampling | 2 |
Test Items | 2 |
Source
Journal of Experimental… | 12 |
Author
Hamdan, M. A. | 2 |
Bradbard, David A. | 1 |
Clawar, Harry J. | 1 |
Cross, Lawrence H. | 1 |
Frary, Robert B. | 1 |
Green, Samuel B. | 1 |
Hansen, Lee H. | 1 |
Hopkins, Thomas F. | 1 |
Hsu, Tse-Chi | 1 |
Koffler, Stephen L. | 1 |
Krutchkoff, R. G. | 1 |
Publication Type
Journal Articles | 7 |
Reports - Research | 7 |
Guides - Non-Classroom | 1 |
Assessments and Surveys
State Trait Anxiety Inventory | 1 |

Hamdan, M. A.; Krutchkoff, R. G. – Journal of Experimental Education, 1975
The separation level of grades on a multiple-choice examination, introduced by Krutchkoff, is presented as a quantitative probabilistic criterion for correctly classifying students by the examination. (Author)
Descriptors: Educational Research, Knowledge Level, Multiple Choice Tests, Scoring Formulas

Hamdan, M. A. – Journal of Experimental Education, 1979
The distribution theory underlying corrections for guessing is analyzed, and the probability distributions of the random variables are derived. The correction in grade, based on random guessing of unknown answers, is compared with corrections based on educated guessing. (Author/MH)
Descriptors: Guessing (Tests), Maximum Likelihood Statistics, Multiple Choice Tests, Probability
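The classical correction for guessing that underlies such analyses can be sketched as follows; this is the standard formula score, not the specific probability distributions derived in the article, and `num_choices` is assumed to be the number of options per item:

```python
def corrected_score(num_right, num_wrong, num_choices):
    """Formula score R - W/(k-1): under purely random guessing of
    unknown answers, the expected corrected score equals the number
    of items actually known (omitted items are not penalized)."""
    return num_right - num_wrong / (num_choices - 1)
```

For example, 30 right and 10 wrong on 5-option items yields a corrected score of 30 - 10/4 = 27.5.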

Clawar, Harry J.; Hopkins, Thomas F. – Journal of Experimental Education, 1975
The present paper emphasizes that interpretations of differences among an individual's scores on a test battery should reflect the same concern for the appropriateness of the reference group. (Author)
Descriptors: Academic Achievement, Educational Research, Measurement Instruments, Scoring Formulas

Rodriguez, T. Nelson; Hansen, Lee H. – Journal of Experimental Education, 1975
Readability formulas are designed to provide quantitative estimates of the relative difficulty of pieces of writing. This study explored the extent to which an increase in the accuracy of a specific readability formula could be obtained by norming it for a restricted set of reading materials and subjects. (Editor)
Descriptors: Cloze Procedure, Multiple Regression Analysis, Readability, Reading Materials

Zimmerman, Donald W. – Journal of Experimental Education, 1977
Derives formulas for the validity of predictor-criterion tests that hold for all test scores constructed according to the expected-value concept of true score. These more general formulas disclose some paradoxical properties of test validity under conditions where errors are correlated and have some implications for practical testing situations…
Descriptors: Correlation, Criterion Referenced Tests, Scoring Formulas, Tables (Data)

Cross, Lawrence H.; And Others – Journal of Experimental Education, 1980
Use of choice-weighted scores as a basis for assigning grades in college courses was investigated. Reliability and validity indices offer little to recommend either type of choice-weighted scoring over number-right scoring. The potential for choice-weighted scoring to enhance the teaching/testing process is discussed. (Author/GK)
Descriptors: Credit Courses, Grading, Higher Education, Multiple Choice Tests

Frary, Robert B.; And Others – Journal of Experimental Education, 1977
To date, a theoretical basis has not been developed for determining changes in reliability when score points from random guessing are eliminated and those from non-random guessing are retained. This paper presents a derivation of an expression for the reliability coefficient which displays the effect of deleting score components due to random…
Descriptors: Data Analysis, Guessing (Tests), Multiple Choice Tests, Scoring Formulas

Penfield, Douglas A.; Koffler, Stephen L. – Journal of Experimental Education, 1978
Three nonparametric alternatives to the parametric Bartlett test are presented for handling the K-sample equality of variance problem. The two-sample Siegel-Tukey test, Mood test, and Klotz test are extended to the multisample situation by Puri's methods. These K-sample scale tests are illustrated and compared. (Author/GDC)
Descriptors: Comparative Analysis, Guessing (Tests), Higher Education, Mathematical Models

Bradbard, David A.; Green, Samuel B. – Journal of Experimental Education, 1986
The effectiveness of the Coombs elimination procedure was evaluated with 29 college students enrolled in a statistics course. Five multiple-choice tests were employed and scored using the Coombs procedure. Results suggest that the Coombs procedure decreased guessing, and this effect increased over the grading period. (Author/LMO)
Descriptors: Analysis of Variance, College Students, Guessing (Tests), Higher Education
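The Coombs elimination procedure is commonly scored as sketched below — one point for each distractor correctly eliminated, a penalty of k-1 if the keyed answer is eliminated; the study's exact scoring details are not reproduced here:

```python
def coombs_item_score(eliminated, correct_option, num_options):
    """Coombs elimination scoring for one item: the examinee crosses
    out options believed wrong; +1 for each distractor eliminated,
    -(k-1) if the keyed answer is eliminated."""
    score = 0
    for option in eliminated:
        if option == correct_option:
            score -= num_options - 1
        else:
            score += 1
    return score
```

On a 4-option item, eliminating all three distractors scores +3, while eliminating the keyed answer scores -3, so random elimination has zero expected payoff.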

Plake, Barbara S.; And Others – Journal of Experimental Education, 1981
Number right and elimination scores were analyzed on a college level mathematics exam assembled from pretest data. Anxiety measures were administered along with the experimental forms to undergraduates. Results suggest that neither test scores nor attitudes are influenced by item order, knowledge thereof, or anxiety level. (Author/GK)
Descriptors: College Mathematics, Difficulty Level, Higher Education, Multiple Choice Tests

Hsu, Tse-Chi; And Others – Journal of Experimental Education, 1984
The indices of item difficulty and discrimination, the coefficients of effective length, and the average item information for both single- and multiple-answer items using six different scoring formulas were computed and compared. These formulas vary in terms of the assignment of partial credit and the correction for guessing. (Author/BW)
Descriptors: College Entrance Examinations, Comparative Analysis, Difficulty Level, Guessing (Tests)

Wilcox, Rand R. – Journal of Experimental Education, 1982
A closed sequential procedure for estimating true score is proposed for use with answer-until-correct tests. The accuracy of determining true score is the same as in conventional sequential solutions, but the possibility of using an unnecessarily large number of items is eliminated. (Author/CM)
Descriptors: Answer Sheets, Guessing (Tests), Item Banks, Measurement Techniques