Showing all 9 results
Peer reviewed
Gallagher, Ann; Bennett, Randy Elliot; Cahalan, Cara; Rock, Donald A. – Educational Assessment, 2002
Evaluated whether variance due to computer-based presentation was associated with performance on a new constructed-response type, Mathematical Expression, that requires students to enter expressions. No statistical evidence of construct-irrelevant variance was detected for the 178 undergraduate and graduate students, but some examinees reported…
Descriptors: College Students, Computer Assisted Testing, Constructed Response, Educational Technology
Kaplan, Randy M.; Bennett, Randy Elliot – 1994
This study explores the potential for using a computer-based scoring procedure for the formulating-hypotheses (F-H) item. This item type presents a situation and asks the examinee to generate explanations for it. Each explanation is judged right or wrong, and the number of creditable explanations is summed to produce an item score. Scores were…
Descriptors: Automation, Computer Assisted Testing, Correlation, Higher Education
Peer reviewed
Bennett, Randy Elliot; Rock, Donald A. – Journal of Educational Measurement, 1995
Examined the generalizability, validity, and examinee perceptions of a computer-delivered version of 8 formulating-hypotheses tasks administered to 192 graduate students. Results support previous research suggesting that formulating-hypotheses items can broaden the abilities measured by graduate admissions measures. (SLD)
Descriptors: Admission (School), College Entrance Examinations, Computer Assisted Testing, Generalizability Theory
Peer reviewed
Enright, Mary K.; Rock, Donald A.; Bennett, Randy Elliot – Journal of Educational Measurement, 1998
Examined alternative item types and section configurations for improving the discriminant and convergent validity of the Graduate Record Examination (GRE) General Test, using a computer-based test given to 388 examinees who had taken the GRE previously. Adding new variations of logical meaning appeared to decrease discriminant validity. (SLD)
Descriptors: Admission (School), College Entrance Examinations, College Students, Computer Assisted Testing
Peer reviewed
Bennett, Randy Elliot; And Others – Applied Psychological Measurement, 1990
The relationship of an expert-system-scored constrained free-response item type to multiple-choice and free-response items was studied using data for 614 students on the College Board's Advanced Placement Computer Science (APCS) Examination. Implications for testing and the APCS test are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Computer Science
Peer reviewed
Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis; Rock, Donald A.; Singley, Mark K.; Katz, Irvin R.; Nhouyvanisvong, Adisack – Journal of Educational Measurement, 1999
Evaluated a computer-delivered response type for measuring quantitative skill, the "Generating Examples" (GE) response type, which presents under-determined problems that can have many right answers. Results from 257 graduate students and applicants indicate that GE scores are reasonably reliable, but only moderately related to Graduate…
Descriptors: College Applicants, Computer Assisted Testing, Graduate Students, Graduate Study
Bennett, Randy Elliot; Rock, Donald A. – 1993
Formulating-Hypotheses (F-H) items present a situation and ask the examinee to generate as many explanations for it as possible. This study examined the generalizability, validity, and examinee perceptions of a computer-delivered version of the task. Eight F-H questions were administered to 192 graduate students. Half of the items restricted…
Descriptors: Computer Assisted Testing, Difficulty Level, Generalizability Theory, Graduate Students
Bennett, Randy Elliot; Sebrechts, Marc M. – 1994
This study evaluated expert system diagnoses of examinees' solutions to complex constructed-response algebra word problems. Problems were presented to three samples (30 college students each), each of which had taken the Graduate Record Examinations General Test. One sample took the problems in paper-and-pencil form and the other two on computer…
Descriptors: Algebra, Automation, Classification, College Entrance Examinations
Bennett, Randy Elliot; And Others – 1995
Two computer-based categorization tasks were developed and pilot tested. In study 1, the task asked examinees to sort mathematical word problem stems according to prototypes. Results with 9 faculty members and 107 undergraduates showed that those who sorted well tended to have higher Graduate Record Examination General Test scores and college…
Descriptors: Admission (School), Classification, College Entrance Examinations, College Faculty