Showing all 8 results
Peer reviewed
Download full text (PDF on ERIC)
Jancarík, Antonín; Kostelecká, Yvona – Electronic Journal of e-Learning, 2015
Electronic testing has become a regular part of online courses. Most learning management systems offer a wide range of tools that can be used in electronic tests. With respect to time demands, the most efficient tools are those that allow automatic assessment. The presented paper focuses on one of these tools: matching questions in which one…
Descriptors: Online Courses, Computer Assisted Testing, Test Items, Scoring Formulas
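As a minimal sketch of how a matching item might be auto-scored, the function below applies simple per-pair partial credit; the one-point-per-correct-pair rule is an illustrative assumption, not the scoring formula analysed in the paper.

def score_matching(answer_key, response):
    """Fraction of pairs matched correctly (hypothetical partial-credit rule)."""
    if not answer_key:
        return 0.0
    correct = sum(1 for left, right in answer_key.items()
                  if response.get(left) == right)
    return correct / len(answer_key)

# Example: three of four pairs matched correctly -> 0.75
print(score_matching({"A": 1, "B": 2, "C": 3, "D": 4},
                     {"A": 1, "B": 2, "C": 3, "D": 1}))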
Peer reviewed
Hutchinson, T. P. – Contemporary Educational Psychology, 1980
In scoring multiple-choice tests, a score of 1 is given to right answers, 0 to unanswered questions, and some negative score to wrong answers. This paper discusses the relation of this negative score to the assumption made about the partial knowledge that the subjects may have. (Author/GDC)
Descriptors: Guessing (Tests), Knowledge Level, Multiple Choice Tests, Scoring Formulas
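For reference, a minimal sketch of the conventional formula-scoring rule the abstract describes (+1 for a right answer, 0 for an omit, a negative weight for a wrong answer), assuming the usual penalty of -1/(k-1) on k-option items; Hutchinson's analysis of how that penalty relates to partial knowledge is not reproduced here.

def formula_score(n_right, n_wrong, k=4):
    """Score = R - W/(k-1); omitted items contribute 0."""
    return n_right - n_wrong / (k - 1)

# Example: 30 right, 8 wrong on 4-option items -> 30 - 8/3 = 27.33...
print(formula_score(30, 8, k=4))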
Peer reviewed
Frary, Robert B. – Applied Measurement in Education, 1989
Multiple-choice response and scoring methods that attempt to determine an examinee's degree of knowledge about each item in order to produce a total test score are reviewed. There is apparently little advantage to such schemes; however, they may have secondary benefits such as providing feedback to enhance learning. (SLD)
Descriptors: Knowledge Level, Multiple Choice Tests, Scoring, Scoring Formulas
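As one concrete instance of the kind of degree-of-knowledge scheme reviewed (chosen here only for illustration, not taken from the article), Coombs-style elimination scoring credits each distractor an examinee strikes out and heavily penalizes striking out the keyed answer.

def elimination_score(eliminated, keyed, options):
    """Illustrative elimination-scoring rule: +1 per distractor struck,
    -(k-1) if the keyed answer is struck."""
    if keyed in eliminated:
        return -(len(options) - 1)
    return len(eliminated & (options - {keyed}))

# Example: 4-option item, two distractors correctly eliminated -> 2
print(elimination_score({"B", "D"}, keyed="A", options={"A", "B", "C", "D"}))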
Peer reviewed
Hamdan, M. A.; Krutchkoff, R. G. – Journal of Experimental Education, 1975
Krutchkoff introduced the separation level of grades on a multiple-choice examination as a quantitative probabilistic criterion for the correct classification of students by the examination. (Author)
Descriptors: Educational Research, Knowledge Level, Multiple Choice Tests, Scoring Formulas
Peer reviewed
Frary, Robert B. – Applied Psychological Measurement, 1980
Six scoring methods for assigning weights to right or wrong responses according to various instructions given to test takers are analyzed with respect to expected chance scores and the effect of various levels of information and misinformation. Three of the methods provide feedback to the test taker. (Author/CTM)
Descriptors: Guessing (Tests), Knowledge Level, Multiple Choice Tests, Scores
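A short illustration of the expected chance score against which such weighting schemes are analysed (the six methods themselves are not reproduced): under blind guessing on k-option items, the expected score is the number of items times the probability-weighted average of the right and wrong weights.

def expected_chance_score(w_right, w_wrong, k, n_items):
    """Expected score under blind guessing with the given item weights."""
    p = 1.0 / k
    return n_items * (p * w_right + (1 - p) * w_wrong)

# Example: 40 four-option items weighted +1 right, -1/3 wrong -> 0.0
print(expected_chance_score(1.0, -1/3, k=4, n_items=40))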
Pugh, Richard C.; Brunza, J. Jay – 1974
An examinee is required to express his confidence in the correctness of each choice of a multiple-choice item in a probabilistic test. For the responses to be valid indicators, the confidence expressed in each choice should be determined by the examinee's knowledge. This study assessed the relationship of the certainty of examinees' responses to…
Descriptors: Behavior, Confidence Testing, Guessing (Tests), Individual Characteristics
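A generic sketch of probabilistic-response scoring of the sort described, where the examinee distributes confidence across the choices; the quadratic (Brier-type) rule below is only an illustration and is not claimed to be the scoring procedure used in this study.

def quadratic_score(confidences, keyed_index):
    """1 minus squared distance from the ideal 'all confidence on the key' vector."""
    return 1.0 - sum((c - (1.0 if i == keyed_index else 0.0)) ** 2
                     for i, c in enumerate(confidences))

# Example: 0.7 on the keyed choice -> 1 - (0.09 + 0.04 + 0.00 + 0.01) = 0.86
print(quadratic_score([0.7, 0.2, 0.0, 0.1], keyed_index=0))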
Peer reviewed
Aiken, Lewis R.; Williams, Newsom – Educational and Psychological Measurement, 1978
Seven formulas for scoring test items with two options (true-false or multiple choice with only two choices) were investigated. Several conditions, such as varying directions for guessing and whether testees had prior knowledge of the proportions of false items on the test, were also investigated. (Author/JKS)
Descriptors: Guessing (Tests), Higher Education, Knowledge Level, Multiple Choice Tests
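One worked detail for the two-option case (a general fact about the classic correction for guessing, not one of the seven formulas specifically): with k = 2 the usual penalty of 1/(k-1) equals a full point, so the corrected score reduces to rights minus wrongs.

def two_option_corrected_score(n_right, n_wrong):
    """Classic guessing correction with k = 2: R - W/(k-1) = R - W."""
    return n_right - n_wrong

# Example: 18 right, 4 wrong on true-false items -> 14
print(two_option_corrected_score(18, 4))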
Bruno, James E.; Opp, Ronald D. – 1985
The admissible probability measurement (APM) format was used to score a criterion-referenced language arts test administered in an inner-city junior high school. Its 30 items covered capitalization, punctuation, parts of speech, and sentence analysis. With APM, students indicate their confidence in their answer choice, and guessing is heavily…
Descriptors: Confidence Testing, Criterion Referenced Tests, Educational Testing, Equivalency Tests
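To show why guessing is heavily penalized under admissible probability measurement, here is an illustrative member of that scoring family (a floored logarithmic rule); the specific APM point table used with this 30-item test is not reproduced here.

import math

def log_score(p_on_key, floor=0.01):
    """Reward the log of the probability the examinee placed on the keyed answer."""
    return math.log(max(p_on_key, floor))

# Examples: sure and right -> 0.0; even split over 4 options -> -1.39;
# full confidence in a wrong answer (probability on key at the floor) -> -4.61
for p in (1.0, 0.25, 0.01):
    print(round(log_score(p), 2))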