Showing 1 to 15 of 34 results
Peer reviewed
Green, Kathy – Journal of Experimental Education, 1979
Reliabilities and concurrent validities of teacher-made multiple-choice and true-false tests were compared. No significant differences were found even when multiple-choice reliability was adjusted to equate testing time. (Author/MH)
Descriptors: Comparative Testing, Higher Education, Multiple Choice Tests, Test Format
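The adjustment of reliability "to equate testing time" mentioned above is typically done with the Spearman-Brown prophecy formula, which projects reliability when a test is lengthened or shortened. A minimal sketch in Python (the function name and the reading of the length factor as a time ratio are my own assumptions, not from the abstract):

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Spearman-Brown prophecy formula.

    Projects the reliability of a test whose length (or, under the
    assumption that items take equal time, its testing time) is
    multiplied by length_factor.
    """
    return (length_factor * reliability) / (1.0 + (length_factor - 1.0) * reliability)
```

For example, a test with reliability 0.5 doubled in length is projected to reach 2/3.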
Thiede, Keith W.; And Others – 1991
A correlational analysis was performed to examine the relationship between recognition and recall test formats. A total of 236 college students completed one of four 80-item general knowledge tests; the forms contained 20 items of each of four formats: (1) true; (2) false; (3) multiple-choice; and (4) free response. Ninety-three of the subjects…
Descriptors: Cognitive Processes, College Students, Comparative Testing, Correlation
Blumberg, Phyllis – 1980
To help determine the role that test instrument formats play in evaluation, two parallel examinations were given to 227 second-year medical students. The tests were based on information presented in a medical case history. One required students to generate their own problem lists (the generate group); the other required the students to select…
Descriptors: Clinical Diagnosis, Comparative Testing, Cues, Higher Education
Peer reviewed
Kolstad, Rosemarie K.; Kolstad, Robert A. – Educational Research Quarterly, 1989
The effect on examinee performance of the rule that multiple-choice (MC) test items require the acceptance of 1 choice was examined for 106 dental students presented with choices in MC and multiple true-false formats. MC items force examinees to select one choice, which causes artificial acceptance of correct/incorrect choices. (SLD)
Descriptors: Comparative Testing, Dental Students, Higher Education, Multiple Choice Tests
Peer reviewed
Frary, Robert B. – Applied Measurement in Education, 1991
The use of the "none-of-the-above" option (NOTA) in 20 college-level multiple-choice tests was evaluated for classes with 100 or more students. Eight academic disciplines were represented, and 295 NOTA and 724 regular test items were used. It appears that the NOTA can be compatible with good classroom measurement. (TJH)
Descriptors: College Students, Comparative Testing, Difficulty Level, Discriminant Analysis
Sawyer, Richard; Welch, Catherine – 1990
The frequency of multiple testing on the Proficiency Examination Program (PEP) multiple-choice tests, the characteristics of examinees who retest, and the effects of retesting on test scores were examined. Tests in the PEP program cover a broad range of academic disciplines and generally include material covered in one or two semesters of an…
Descriptors: Achievement Gains, Achievement Tests, College Students, Comparative Testing
Peer reviewed
Bennett, Randy Elliot; And Others – Applied Psychological Measurement, 1990
The relationship of an expert-system-scored constrained free-response item type to multiple-choice and free-response items was studied using data for 614 students on the College Board's Advanced Placement Computer Science (APCS) Examination. Implications for testing and the APCS test are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Computer Science
Peer reviewed
Crehan, Kevin D.; And Others – Educational and Psychological Measurement, 1993
Studies with 220 college students found that multiple-choice test items with three options are more difficult than those with four options, and that items with the none-of-these option are more difficult than those without it. Neither format manipulation affected item discrimination. Implications for test construction are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Difficulty Level, Distractors (Tests)
Peer reviewed
Bridgeman, Brent; Rock, Donald A. – Journal of Educational Measurement, 1993
Exploratory and confirmatory factor analyses were used to explore relationships among existing item types and three new computer-administered item types for the analytical scale of the Graduate Record Examination General Test. Results with 349 students indicate constructs the item types are measuring. (SLD)
Descriptors: College Entrance Examinations, College Students, Comparative Testing, Computer Assisted Testing
PDF pending restoration
Anderson, Paul S.; Hyers, Albert D. – 1991
Three descriptive statistics (difficulty, discrimination, and reliability) of multiple-choice (MC) test items were compared to those of a new (1980s) format of machine-scored questions. The new method, answer-bank multi-digit testing (MDT), uses alphabetized lists of up to 1,000 alternatives and approximates the completion style of assessment…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Correlation
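The three descriptive statistics compared above are standard classical-test-theory quantities. A hedged sketch of the first two for dichotomous (0/1) item scores, taking difficulty as the proportion answering correctly and discrimination as the item-total point-biserial correlation (function and variable names are my own; the abstract does not specify how the statistics were computed):

```python
import numpy as np

def item_stats(scores: np.ndarray):
    """Classical item statistics for a 0/1 score matrix.

    scores: shape (n_examinees, n_items), entries 0 or 1.
    Returns (difficulty, discrimination) arrays, one value per item.
    """
    # Difficulty (p-value): proportion of examinees answering each item correctly.
    difficulty = scores.mean(axis=0)
    # Discrimination: point-biserial correlation of each item with the total score.
    totals = scores.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(scores[:, j], totals)[0, 1]
        for j in range(scores.shape[1])
    ])
    return difficulty, discrimination
```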
Cizek, Gregory J. – 1991
A commonly accepted rule for developing equated examinations using the common-items non-equivalent groups (CINEG) design is that items common to the two examinations being equated should be identical. The CINEG design calls for two groups of examinees to respond to a set of common items that is included in two examinations. In practice, this rule…
Descriptors: Certification, Comparative Testing, Difficulty Level, Higher Education
Green, Kathy – 1978
Forty 3-option multiple-choice (MC) statements on a midterm examination were converted to 120 true-false (TF) statements, identical in content. Test forms (MC and TF) were randomly administered to 50 undergraduates to investigate the validity and internal consistency reliability of the two forms. A Kuder-Richardson formula 20 reliability was…
Descriptors: Achievement Tests, Comparative Testing, Higher Education, Multiple Choice Tests
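The Kuder-Richardson formula 20 reliability named above can be computed directly from a 0/1 score matrix. A minimal sketch, assuming population (ddof=0) variances; the function name is my own:

```python
import numpy as np

def kr20(item_scores: np.ndarray) -> float:
    """Kuder-Richardson formula 20 for dichotomous (0/1) item scores.

    item_scores: shape (n_examinees, n_items), entries 0 or 1.
    KR-20 = (k / (k - 1)) * (1 - sum(p*q) / var(total scores)).
    """
    k = item_scores.shape[1]                 # number of items
    p = item_scores.mean(axis=0)             # proportion correct per item
    q = 1.0 - p
    total_var = item_scores.sum(axis=1).var(ddof=0)  # population variance of totals
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)
```

With perfectly consistent examinees (each answers all items the same way), the statistic reaches its maximum of 1.0.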
Breland, Hunter M.; And Others – 1987
Six university English departments collaborated in this examination of the differences between multiple-choice and essay tests in evaluating writing skills. The study also investigated ways the two tools can complement one another, ways to improve cost effectiveness of essay testing, and ways to integrate assessment and the educational process.…
Descriptors: Comparative Testing, Efficiency, Essay Tests, Higher Education
Peer reviewed
Harasym, P. H.; And Others – Evaluation and the Health Professions, 1980
Coded items, as opposed to free-response items, in a multiple-choice physiology test had a cueing effect that raised students' scores, especially for lower achievers. Reliability of the coded items was also lower. Item format and scoring method affected test results. (GDC)
Descriptors: Achievement Tests, Comparative Testing, Cues, Higher Education
Dowd, Steven B. – 1992
An alternative to multiple-choice (MC) testing is suggested as it pertains to the field of radiologic technology education. General principles for writing MC questions are given and contrasted with a new type of MC question, the alternate-choice (AC) question, in which the answer choices are embedded in the question in a short form that resembles…
Descriptors: Comparative Testing, Difficulty Level, Evaluation Methods, Higher Education