Showing all 14 results
Peer reviewed
Lahner, Felicitas-Maria; Lörwald, Andrea Carolin; Bauer, Daniel; Nouns, Zineb Miriam; Krebs, René; Guttormsen, Sissel; Fischer, Martin R.; Huwendiek, Sören – Advances in Health Sciences Education, 2018
Multiple true-false (MTF) items are a widely used supplement to the commonly used single-best answer (Type A) multiple choice format. However, an optimal scoring algorithm for MTF items has not yet been established, as existing studies yielded conflicting results. Therefore, this study analyzes two questions: What is the optimal scoring algorithm…
Descriptors: Scoring Formulas, Scoring Rubrics, Objective Tests, Multiple Choice Tests
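The Lahner et al. entry above concerns scoring algorithms for multiple true-false (MTF) items but, as excerpted, does not name the algorithms that were compared. Purely to illustrate what an MTF scoring rule looks like, the sketch below implements two commonly discussed options, number-correct scoring and all-or-nothing scoring; the function names and example data are hypothetical and are not taken from the study.

    # Illustrative sketch only: these two rules are common examples of MTF scoring,
    # not necessarily the algorithms analyzed by Lahner et al.

    def score_number_correct(responses, key):
        """One point for each true/false statement marked correctly."""
        return sum(r == k for r, k in zip(responses, key))

    def score_all_or_nothing(responses, key):
        """Full credit only if every statement in the item is marked correctly."""
        return 1 if all(r == k for r, k in zip(responses, key)) else 0

    # Example: a four-statement MTF item (True means "this statement is true").
    key = [True, False, False, True]
    responses = [True, False, True, True]   # one statement marked incorrectly

    print(score_number_correct(responses, key))   # 3 (three of four statements correct)
    print(score_all_or_nothing(responses, key))   # 0 (any error forfeits the item)

Under number-correct scoring a single error costs one point out of four, whereas under all-or-nothing scoring it forfeits the whole item, which is why the choice of scoring algorithm can noticeably affect item difficulty and reliability.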
Peer reviewed
Tasdemir, Mehmet – Journal of Instructional Psychology, 2010
This study compares the difficulty levels, discrimination power, and achievement-testing power of multiple-choice tests and true-false tests, thereby examining the commonly held hypothesis that multiple-choice tests do not share the same properties as true-false tests. The research was performed with…
Descriptors: Achievement Tests, Multiple Choice Tests, Objective Tests, Student Evaluation
Peer reviewed
Ebel, Robert L. – Journal of Educational Measurement, 1975
Descriptors: Comparative Analysis, Multiple Choice Tests, Objective Tests, Teachers
Peer reviewed
Zimmerman, Donald W.; And Others – Journal of Experimental Education, 1984
Three types of test were compared: a completion test, a matching test, and a multiple-choice test. The completion test was more reliable than the matching test, and the matching test was more reliable than the multiple-choice test. (Author/BW)
Descriptors: Comparative Analysis, Error of Measurement, Higher Education, Mathematical Models
Ebel, Robert L. – 1971
The suggestion that multiple-choice items can be converted to true-false items without essentially changing what the item measures and with possible improvement in efficiency is investigated. Each of the 90 four-choice items in a natural science test was rewritten into a pair of true-false items, one true, one false. The resulting 180 items were…
Descriptors: Achievement Tests, Comparative Analysis, Item Analysis, Multiple Choice Tests
Oosterhof, Albert C.; Glasnapp, Douglas R. – 1972
The present study was concerned with several currently unanswered questions, two of which are: what is an empirically determined ratio of multiple-choice to equivalent true-false items that can be answered in a given amount of time, and, for achievement test items administered within a classroom situation, which of the two formats under…
Descriptors: Comparative Analysis, Guessing (Tests), Multiple Choice Tests, Objective Tests
Koehler, Roger A. – 1972
This paper provides substantial evidence in favor of the continued use of conventional objective testing procedures in lieu of either the Coombs' cross-out technique or the Dressel and Schmid free-choice response procedure. From the studies presented in this paper, the tendency is for the cross-out and the free choice methods to yield a decrement…
Descriptors: Comparative Analysis, Guessing (Tests), Objective Tests, Response Style (Tests)
Peer reviewed
Veal, Ramon L.; Hudson, Sally Ann – Research in the Teaching of English, 1983
Reports on a study that compared the potential validity, reliability, and costs of several direct and indirect measures of writing. (HOD)
Descriptors: Comparative Analysis, Educational Assessment, Holistic Evaluation, Objective Tests
Stiggins, Richard J. – 1981
An area of current concern is that of the advantages and disadvantages of measuring writing proficiency directly via writing samples, and indirectly via objective tests. Much research has been completed documenting the correlation between direct and indirect measures. However, there had not yet been a systematic and detailed conceptual analysis…
Descriptors: Comparative Analysis, Elementary Secondary Education, Evaluation Methods, Higher Education
Peer reviewed
Allison, Donald E. – Alberta Journal of Educational Research, 1984
Reports that no significant difference in reliability appeared between a heterogeneous and a homogeneous form of the same general science matching-item test administered to 316 sixth-grade students but that scores on the heterogeneous form of the test were higher, independent of the examinee's sex or intelligence. (SB)
Descriptors: Comparative Analysis, Comparative Testing, Elementary Education, Grade 6
Hodgson, Mary L.; Cramer, Stanley H. – Measurement and Evaluation in Guidance, 1977
An increasing number of occupational and educational planning aids are relying on self-estimates of aptitudes rather than objective measures of abilities. A preliminary examination of the accuracy of self-estimated aptitudes with high school seniors (N=74) suggests unsubstantiated self-estimates may lead to decisions based on unrealistic…
Descriptors: Career Development, Comparative Analysis, High School Students, Interest Inventories
Peer reviewed
Frisbie, David A.; Sweeney, Daryl C. – Journal of Educational Measurement, 1982
A 100-item five-choice multiple choice (MC) biology final exam was converted to multiple choice true-false (MTF) form to yield two content-parallel test forms comprised of the two item types. Students found the MTF items easier and preferred MTF over MC; the MTF subtests were more reliable. (Author/GK)
Descriptors: Biology, College Science, Comparative Analysis, Difficulty Level
Cummings, Oliver W. – Measurement and Evaluation in Guidance, 1981
Examined the effects of answer changing on the test performance of junior high school students. Results indicated that changing answers neither increases the reliability nor decreases the standard error of measurement of the test. (Author/RC)
Descriptors: Change, Comparative Analysis, Error of Measurement, Junior High Schools
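The Cummings entry above reports that answer changing affected neither test reliability nor the standard error of measurement. In classical test theory these two quantities are directly linked, SEM = SD × √(1 − reliability), so a brief sketch of that relationship may help; the numbers below are hypothetical and are not figures from the study.

    from math import sqrt

    def standard_error_of_measurement(sd, reliability):
        """Classical test theory: SEM = SD * sqrt(1 - reliability)."""
        return sd * sqrt(1.0 - reliability)

    # Hypothetical values for illustration only (not taken from Cummings, 1981).
    sd = 8.0            # observed-score standard deviation
    reliability = 0.84  # e.g., an internal-consistency estimate

    print(round(standard_error_of_measurement(sd, reliability), 2))  # 3.2

Because SEM depends on both the score spread and the reliability, reporting both statistics, as the abstract does, gives a fuller picture of measurement precision than either figure alone.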
Peer reviewed
Anderson, Paul S.; And Others – Illinois School Research and Development, 1985
Concludes that the Multi-Digit Test stimulates better retention than multiple choice tests while offering the advantage of computerized scoring and analysis. (FL)
Descriptors: Comparative Analysis, Computer Assisted Testing, Educational Research, Higher Education