Showing 301 to 315 of 582 results
Peer reviewed
Duncan, George T.; Milton, E. O. – Psychometrika, 1978
A multiple-answer multiple-choice test offers several answer choices for each stem, any number of which may be correct. This article discusses a class of scoring procedures for such tests called the binary class. (Author/JKS)
Descriptors: Answer Keys, Measurement Techniques, Multiple Choice Tests, Scoring Formulas
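As a concrete illustration of the idea (not the article's own formulation), the sketch below scores a multiple-answer item with one simple member of a binary family of rules: each option contributes 1 when the examinee's select/omit decision matches the key and 0 otherwise. The function name and data are hypothetical.

```python
# Hypothetical sketch of one binary-type scoring rule for a multiple-answer
# multiple-choice item: each option scores 1 if the examinee's select/omit
# decision matches the answer key, 0 otherwise; the item score is the sum.

def score_item(selected: set, key: set, options: list) -> int:
    """Count the options the examinee classified correctly."""
    return sum((opt in selected) == (opt in key) for opt in options)

options = ["A", "B", "C", "D", "E"]
print(score_item({"A", "C"}, {"A", "C", "D"}, options))  # 4 of 5 options handled correctly
```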
Peer reviewed
McGarvey, Bill; And Others – Applied Psychological Measurement, 1977
The most consistently used scoring system for the rod-and-frame task has been the total number of degrees in error from the true vertical. Since a logical case can be made for at least four alternative scoring systems, a thorough comparison of all five systems was performed. (Author/CTM)
Descriptors: Analysis of Variance, Cognitive Style, Cognitive Tests, Elementary Education
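For readers unfamiliar with the conventional rod-and-frame score, the sketch below computes it alongside one plausible alternative; the remaining systems compared in the article are not reproduced, and the trial data are invented.

```python
# Conventional score: total degrees of error from true vertical, i.e. the sum
# of absolute deviations across trials. A signed (algebraic) total is shown as
# one possible alternative; there, left and right errors can cancel.

def total_absolute_error(settings):
    return sum(abs(d) for d in settings)

def total_signed_error(settings):
    return sum(settings)

trials = [3.5, -2.0, 4.0, -1.5]        # degrees from vertical; sign = tilt direction
print(total_absolute_error(trials))     # 11.0
print(total_signed_error(trials))       # 4.0
```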
Peer reviewed
Essex, Diane L. – Journal of Medical Education, 1976
Two multiple-choice scoring schemes--a partial credit scheme and a dichotomous approach--were compared by analyzing means, variances, and reliabilities on alternate measures and by examining student reactions. Students preferred the partial-credit approach, which is recommended if rewarding partial knowledge is an important concern. (Editor/JT)
Descriptors: Higher Education, Medical Students, Multiple Choice Tests, Reliability
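A minimal sketch of the contrast, with invented option weights rather than the schemes actually studied: dichotomous scoring credits only the keyed answer, while a partial-credit rule assigns each option a weight reflecting partial knowledge.

```python
# Dichotomous scoring: 1 for the keyed answer, 0 otherwise.
# Partial-credit scoring: each option carries an illustrative weight in [0, 1].

def dichotomous(choice, key):
    return 1 if choice == key else 0

def partial_credit(choice, weights):
    return weights.get(choice, 0.0)

weights = {"A": 1.0, "B": 0.5, "C": 0.25, "D": 0.0}   # hypothetical credit per option
print(dichotomous("B", "A"))         # 0
print(partial_credit("B", weights))  # 0.5
```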
Peer reviewed
Oosterhof, Albert C. – Educational Measurement: Issues and Practice, 1987
This module describes a method for weighting various measures of student achievement, such as examinations and home assignments, in order to combine these measures into a final grade. Standard deviation methods receive extensive attention. (TJH)
Descriptors: Criterion Referenced Tests, Evaluation Criteria, Grading, Norm Referenced Tests
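The standard-deviation approach is often implemented by standardizing each component so that differences in spread do not distort the intended weights; the sketch below shows that idea with invented scores and weights, not the module's own worked example.

```python
# Each component is converted to z-scores (equalizing spreads), then the
# standardized scores are weighted and summed into a composite for grading.

from statistics import mean, stdev

def standardize(scores):
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

def composite(components, weights):
    z = {name: standardize(scores) for name, scores in components.items()}
    n = len(next(iter(components.values())))
    return [sum(weights[name] * z[name][i] for name in components) for i in range(n)]

components = {
    "exams":    [78, 85, 92, 60],   # large spread
    "homework": [18, 19, 20, 17],   # small spread
}
weights = {"exams": 0.6, "homework": 0.4}
print(composite(components, weights))
```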
Peer reviewed
Dorans, Neil J. – Journal of Educational Measurement, 1986
The analytical decomposition demonstrates how the effects of item characteristics, test properties, individual examinee responses, and rounding rules combine to produce the item deletion effect on the equating/scaling function and candidate scores. The empirical portion of the report illustrates the effects of item deletion on reported score…
Descriptors: Difficulty Level, Equated Scores, Item Analysis, Latent Trait Theory
Peer reviewed
Harris, Albert J.; Jacobson, Milton D. – Journal of Reading, 1976
Describes a computerized formula that measures readability according to how well high school seniors comprehend reading passages. (RB)
Descriptors: Grade 12, Readability, Readability Formulas, Reading Achievement
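The sketch below shows only the general shape of such a formula: a weighted combination of word difficulty and sentence length. The coefficients and the tiny word list are placeholders, not the published Harris-Jacobson values.

```python
# Generic two-predictor readability formula: percentage of "hard" words (not on
# a core word list) and average sentence length, combined with placeholder
# coefficients. Higher scores indicate harder text.

CORE_WORDS = {"the", "a", "and", "to", "of", "in", "is", "it", "was", "he"}  # stand-in list

def readability_score(text, b0=1.0, b1=0.05, b2=0.1):
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.lower().split()
    pct_hard = 100 * sum(w.strip(".,!?") not in CORE_WORDS for w in words) / len(words)
    avg_sentence_len = len(words) / len(sentences)
    return b0 + b1 * pct_hard + b2 * avg_sentence_len

print(readability_score("The cat sat in the hat. It was a fine day to nap."))
```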
Meyer, Robert H. – NISE Brief, 2000
This issue of NISE Brief discusses the weaknesses of the most commonly used educational outcome indicators--average and median test scores and proficiency-level indicators--and the advantages of value-added indicators. As an example, it offers a critique, based on national data, of the average test score as a measure of school and program performance…
Descriptors: Elementary Secondary Education, Mathematics Education, Outcomes of Education, Program Evaluation
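To make the contrast concrete, here is a deliberately simplified comparison of the two kinds of indicator, using invented scores and the crudest possible value-added definition (mean gain over prior scores); operational value-added models use regression adjustment and additional controls.

```python
# Average score measures the level students reach (mixing program effects with
# intake); a gain-based value-added indicator measures growth from prior scores.

from statistics import mean

def average_score(post):
    return mean(post)

def value_added(pre, post):
    """Mean gain over prior scores (slope fixed at 1 for simplicity)."""
    return mean(y - x for x, y in zip(pre, post))

pre  = [40, 55, 60, 70]   # prior-year scores of the school's students
post = [50, 63, 66, 74]   # current-year scores
print(average_score(post))     # 63.25
print(value_added(pre, post))  # 7.0
```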
Camara, Wayne J. – College Entrance Examination Board, 2003
The essay on the writing section of the SAT will be scored using a holistic approach. In holistic scoring, a piece of writing is considered as a total piece of work--the whole of which is greater than the sum of its parts.
Descriptors: Scoring, Essay Tests, Holistic Approach, Scoring Formulas
Peer reviewed
Harris, Chester W. – Journal of Educational Measurement, 1973
A brief note presenting algebraically equivalent formulas for the variances of three error types. (Author)
Descriptors: Algebra, Analysis of Covariance, Analysis of Variance, Error of Measurement
Peer reviewed
Scott, William A. – Educational and Psychological Measurement, 1972
Descriptors: Item Sampling, Mathematical Applications, Scoring Formulas, Statistical Analysis
Peer reviewed
Gleser, Leon Jay – Educational and Psychological Measurement, 1972
This paper is concerned with the effect that ipsative scoring has upon a commonly used index of between-subtest correlation. (Author)
Descriptors: Comparative Analysis, Forced Choice Technique, Mathematical Applications, Measurement Techniques
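A small numerical illustration (not taken from the article) of the phenomenon at issue: ipsatizing scores, i.e. subtracting each person's own mean across subtests, forces the within-person deviations to sum to zero and so pulls between-subtest correlations downward, toward an average of -1/(m-1) for m subtests.

```python
# Invented data: rows are persons, columns are m = 3 subtests. Ipsative scores
# subtract each person's own mean; the average between-subtest correlation is
# pulled toward -1/(m-1) = -0.5.

from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

raw = [[10, 12, 11], [14, 15, 16], [8, 9, 7], [12, 14, 13]]
ipsative = [[x - mean(person) for x in person] for person in raw]

pairs = [(0, 1), (0, 2), (1, 2)]
for data, label in ((raw, "raw"), (ipsative, "ipsative")):
    cols = list(zip(*data))
    avg_r = mean(pearson(cols[i], cols[j]) for i, j in pairs)
    print(label, round(avg_r, 2))   # raw ~0.98, ipsative ~-0.43
```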
Peer reviewed
Collet, Leverne S. – Journal of Educational Measurement, 1971
The purpose of this paper was to provide an empirical test of the hypothesis that elimination scores are more reliable and valid than classical corrected-for-guessing scores or weighted-choice scores. The evidence presented supports the hypothesized superiority of elimination scoring. (Author)
Descriptors: Evaluation, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
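For orientation, the sketch below implements two of the rival rules under common textbook definitions (the paper's exact variants may differ): the classical correction for guessing and a Coombs-style elimination score.

```python
# Correction for guessing: rights minus wrongs / (k - 1), where k is the number
# of options per item. Elimination scoring (one common variant): +1 for each
# distractor eliminated, -(k - 1) if the keyed answer is eliminated.

def corrected_for_guessing(rights, wrongs, k):
    return rights - wrongs / (k - 1)

def elimination_item_score(eliminated, key, k):
    score = sum(1 for opt in eliminated if opt != key)
    if key in eliminated:
        score -= (k - 1)
    return score

print(corrected_for_guessing(rights=30, wrongs=8, k=4))       # 27.33...
print(elimination_item_score({"B", "C"}, key="A", k=4))        # 2
print(elimination_item_score({"A", "B", "C"}, key="A", k=4))   # -1
```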
van den Brink, Wulfert – Evaluation in Education: International Progress, 1982
Binomial models for domain-referenced testing are compared, emphasizing the assumptions underlying the beta-binomial model. Advantages and disadvantages are discussed. A proposed item sampling model is presented which takes the effect of guessing into account. (Author/CM)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Item Sampling, Measurement Techniques
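A sketch of the binomial test model with a simple knowledge-or-random-guessing parameterization, shown as a generic illustration rather than the article's specific proposal: an examinee "knows" a proportion zeta of the domain, guesses at random (1/k) otherwise, and the number-correct score on n sampled items is binomial.

```python
# P(correct) = zeta + (1 - zeta)/k under knowledge-or-random-guessing;
# the number-correct score X on n items is then Binomial(n, p).

from math import comb

def p_correct(zeta, k):
    return zeta + (1 - zeta) / k

def prob_score(x, n, zeta, k):
    p = p_correct(zeta, k)
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(p_correct(zeta=0.6, k=4))              # 0.7
print(prob_score(x=8, n=10, zeta=0.6, k=4))  # probability of exactly 8 correct
```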
Peer reviewed
Spencer, Ernest – Scottish Educational Review, 1981
Using data from the SCRE Criterion Test composition papers, the author tests the hypothesis that the bulk of inter-marker unreliability is caused by inter-marker inconsistency--which is not correctable statistically. He suggests that a shift to "consensus" standards will realize greater improvements than statistical standardizing alone.…
Descriptors: Achievement Tests, English Instruction, Essay Tests, Reliability
Atkinson, George F.; Doadt, Edward – Assessment in Higher Education, 1980
Some perceived difficulties with conventional multiple-choice tests are mentioned, and a modified form of examination is proposed. It uses a computer program to award partial marks for partially correct answers and full marks for correct answers, and to check for widespread misunderstanding of an item or subject. (MSE)
Descriptors: Achievement Tests, Computer Assisted Testing, Higher Education, Multiple Choice Tests
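An illustrative sketch of the kind of processing described: award full marks for the keyed answer, partial marks for designated "partially correct" options, and flag an item when an unkeyed option attracts a large share of the class. The thresholds and mark values are invented, not the authors' program.

```python
# Partial/full marking plus a simple flag for possible widespread misunderstanding.

from collections import Counter

def mark_response(choice, key, partial):
    if choice == key:
        return 1.0
    return partial.get(choice, 0.0)

def flag_misunderstanding(responses, key, threshold=0.5):
    counts = Counter(responses)
    n = len(responses)
    return [opt for opt, c in counts.items() if opt != key and c / n >= threshold]

responses = ["B", "B", "A", "B", "C", "B"]
print([mark_response(r, key="A", partial={"B": 0.5}) for r in responses])
print(flag_misunderstanding(responses, key="A"))   # ['B'] chosen by 4/6 of the class
```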