Descriptor
Computer Science (5)
Scoring (4)
Multiple Choice Tests (3)
Test Items (3)
College Students (2)
Comparative Testing (2)
Constructed Response (2)
Educational Technology (2)
Expert Systems (2)
Factor Structure (2)
Goodness of Fit (2)
Author
Bennett, Randy Elliot (5)
Martinez, Michael E. (1)
Publication Type
Journal Articles (3)
Reports - Evaluative (2)
Reports - Research (2)
Information Analyses (1)
Assessments and Surveys
Advanced Placement… (4)
Bennett, Randy Elliot; And Others – 1988
This study investigated the extent of agreement between MicroPROUST, a prototype microcomputer-based expert scoring system, and human readers for two Advanced Placement Computer Science free-response items. To assess agreement, a balanced incomplete block design was used with 2 groups of 4 readers grading 43 student solutions to the first problem…
Descriptors: Advanced Placement, Computer Science, Constructed Response, Educational Technology
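The machine-human agreement question in the study above can be illustrated with a small sketch. The scores below are hypothetical and the functions are illustrative only; MicroPROUST's actual rubric, the balanced incomplete block design, and the study's data are not reproduced here. The sketch just shows two common agreement indices: the exact-agreement rate and the Pearson correlation between machine and human scores on the same solutions.

```python
# Sketch: comparing machine-assigned and human-assigned scores on the
# same set of student solutions (synthetic data; not the study's data).

def exact_agreement(machine, human):
    """Fraction of solutions where the two scores match exactly."""
    assert len(machine) == len(human)
    return sum(m == h for m, h in zip(machine, human)) / len(machine)

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

if __name__ == "__main__":
    # Hypothetical 0-9 scores for ten student solutions.
    machine = [7, 5, 9, 3, 6, 8, 4, 7, 5, 9]
    human   = [7, 6, 9, 3, 5, 8, 4, 7, 5, 8]
    print(f"exact agreement: {exact_agreement(machine, human):.2f}")
    print(f"correlation:     {pearson_r(machine, human):.2f}")
```

In practice an agreement study would report these indices per item and per reader pair; the study's actual design distributed 43 solutions across two groups of four readers.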

Bennett, Randy Elliot; And Others – Applied Psychological Measurement, 1990
The relationship of an expert-system-scored constrained free-response item type to multiple-choice and free-response items was studied using data for 614 students on the College Board's Advanced Placement Computer Science (APCS) Examination. Implications for testing and the APCS test are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Computer Science

Bennett, Randy Elliot; And Others – 1989
This study examined the relationship of a machine-scorable, constrained free-response computer science item that required the student to debug a faulty program to two other types of items: multiple-choice and free-response requiring production of a computer program. The free-response items were from the College Board's Advanced Placement Computer…
Descriptors: College Students, Computer Science, Computer Software, Debugging (Computers)

Martinez, Michael E.; Bennett, Randy Elliot – Applied Measurement in Education, 1992
New developments in the use of automatically scorable constructed response item types for large-scale assessment are reviewed for five domains: (1) mathematical reasoning; (2) algebra problem solving; (3) computer science; (4) architecture; and (5) natural language. Ways in which these technologies are likely to shape testing are considered. (SLD)
Descriptors: Algebra, Architecture, Automation, Computer Science

Bennett, Randy Elliot; And Others – Journal of Educational Measurement, 1991
The relationship of multiple-choice and free-response items on the College Board's Advanced Placement Computer Science Examination was studied using confirmatory factor analysis. Results with 2 samples of 1,000 high school students suggested that the most parsimonious fit was achieved using a single factor. Implications for construct validity are…
Descriptors: Chi Square, College Entrance Examinations, Comparative Testing, Computer Science
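The single-factor finding above came from confirmatory factor analysis, which requires a full SEM fit; as a rough exploratory analogue, one can eigen-decompose a correlation matrix of section scores and ask how much of the shared variance a single component captures. Everything below is a hypothetical sketch: the correlation values and the four score parcels are invented, not taken from the examination data.

```python
import numpy as np

# Sketch: exploratory analogue of the "one factor or two?" question.
# Four hypothetical section-score parcels (two multiple-choice, two
# free-response) with a synthetic correlation matrix.
corr = np.array([
    [1.00, 0.75, 0.70, 0.68],
    [0.75, 1.00, 0.72, 0.69],
    [0.70, 0.72, 1.00, 0.74],
    [0.68, 0.69, 0.74, 1.00],
])

eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, descending
proportion = eigvals / eigvals.sum()      # variance share per component

print("eigenvalues:", np.round(eigvals, 3))
print(f"share of variance, first component: {proportion[0]:.2f}")
```

When the first eigenvalue dwarfs the rest (here it carries most of the total variance), a one-factor description is plausible, which is the exploratory counterpart of the parsimonious single-factor fit the study reports from CFA on its two samples.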