Showing all 4 results
Peer reviewed
Bennett, Randy Elliot; And Others – Applied Psychological Measurement, 1990
The relationship of an expert-system-scored constrained free-response item type to multiple-choice and free-response items was studied using data for 614 students on the College Board's Advanced Placement Computer Science (APCS) Examination. Implications for testing and the APCS test are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Computer Science
Peer reviewed
Traub, Ross E.; Fisher, Charles W. – Applied Psychological Measurement, 1977
Two sets of mathematical reasoning and two sets of verbal comprehension items were cast into each of three formats--constructed response, standard multiple-choice, and Coombs multiple-choice--in order to assess whether tests with identical content but different formats measure the same attribute. (Author/CTM)
Descriptors: Comparative Testing, Confidence Testing, Constructed Response, Factor Analysis
Peer reviewed
Birenbaum, Menucha; And Others – Applied Psychological Measurement, 1992
The effect of multiple-choice (MC) versus open-ended (OE) response format on diagnostic assessment of algebra test performance was investigated with 231 eighth and ninth graders in Tel Aviv (Israel) using bug analysis and rule-space analysis. Both analyses indicated closer similarity between parallel OE subsets than between stem-equivalent OE and MC subsets.…
Descriptors: Algebra, Comparative Testing, Educational Assessment, Educational Diagnosis
Peer reviewed
van den Bergh, Huub – Applied Psychological Measurement, 1990
In this study, 590 third graders from 12 Dutch schools took 32 tests measuring 16 semantic Structure-of-Intellect (SI) abilities and 1 of 4 reading comprehension tests, involving either multiple-choice or open-ended items. Results indicate that the two item types for reading comprehension are congeneric with respect to the SI abilities measured. (TJH)
Descriptors: Comparative Testing, Computer Assisted Testing, Construct Validity, Elementary Education