Showing all 14 results
Bennett, Randy Elliot – Educational Testing Service, 2011
CBAL, an acronym for Cognitively Based Assessment of, for, and as Learning, is a research initiative intended to create a model for an innovative K-12 assessment system that provides summative information for policy makers, as well as formative information for classroom instructional purposes. This paper summarizes empirical results from 16 CBAL…
Descriptors: Educational Assessment, Elementary Secondary Education, Summative Evaluation, Formative Evaluation
Bennett, Randy Elliot; Braswell, James; Oranje, Andreas; Sandene, Brent; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2008
This article describes selected results from the Math Online (MOL) study, one of three field investigations sponsored by the National Center for Education Statistics (NCES) to explore the use of new technology in NAEP. Of particular interest in the MOL study was the comparability of scores from paper- and computer-based tests. A nationally…
Descriptors: National Competency Tests, Familiarity, Computer Assisted Testing, Mathematics Tests
Peer reviewed
Gallagher, Ann; Bennett, Randy Elliot; Cahalan, Cara; Rock, Donald A. – Educational Assessment, 2002
Evaluated whether variance due to computer-based presentation was associated with performance on a new constructed-response type, Mathematical Expression, that requires students to enter expressions. No statistical evidence of construct-irrelevant variance was detected for the 178 undergraduate and graduate students, but some examinees reported…
Descriptors: College Students, Computer Assisted Testing, Constructed Response, Educational Technology
Peer reviewed
Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis – Applied Psychological Measurement, 2000
Describes three open-ended response types that could broaden the conception of mathematical problem solving used in computerized admissions tests: (1) mathematical expression (ME); (2) generating examples (GE); and (3) graphical modeling (GM). Illustrates how combining ME, GE, and GM can form extended constructed response problems. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Constructed Response, Mathematics Tests
Peer reviewed
Sandene, Brent; Horkay, Nancy; Bennett, Randy Elliot; Allen, Nancy; Braswell, James; Kaplan, Bruce; Oranje, Andreas – National Center for Education Statistics, 2005
This publication presents the reports from two studies, Math Online (MOL) and Writing Online (WOL), part of the National Assessment of Educational Progress (NAEP) Technology-Based Assessment (TBA) project. Funded by the National Center for Education Statistics (NCES), the Technology-Based Assessment project is intended to explore the use of new…
Descriptors: Grade 8, Statistical Analysis, Scoring, Familiarity
Peer reviewed
Katz, Irvin R.; Bennett, Randy Elliot; Berger, Aliza E. – Journal of Educational Measurement, 2000
Studied the solution strategies of 55 high school students who solved parallel constructed response and multiple-choice items that differed only in the presence of response options. Differences in difficulty between response formats did not correspond to differences in strategy choice. Interprets results in light of the relative comprehension…
Descriptors: College Entrance Examinations, Constructed Response, Difficulty Level, High School Students
Peer reviewed
Bennett, Randy Elliot; And Others – Journal of Educational Measurement, 1989
Causes of differential item difficulty for blind students taking the braille edition of the Scholastic Aptitude Test's mathematical section were studied. Data for 261 blind students were compared with data for 8,015 non-handicapped students. Results show an association between selected item categories and differential item functioning. (TJH)
Descriptors: Braille, College Entrance Examinations, Comparative Analysis, Difficulty Level
Singley, Mark K.; Bennett, Randy Elliot – 1995
One of the main limitations of the current generation of computer-based tests is its dependency on the multiple-choice item. This research was aimed at extending computer-based testing by bringing limited forms of performance assessment to it in the domain of mathematics. This endeavor involves not only building task types that better reflect…
Descriptors: Computer Assisted Testing, Item Analysis, Mathematics Tests, Multiple Choice Tests
Peer reviewed
Bennett, Randy Elliot; And Others – Journal of Educational Measurement, 1987
To identify broad classes of items on the Scholastic Aptitude Test that behave differentially for handicapped examinees taking special, extended-time administrations, the performance of nine handicapped groups and one nonhandicapped group on each of two forms of the SAT was investigated through a two-stage procedure. (Author/LMO)
Descriptors: College Entrance Examinations, Disabilities, Hearing Impairments, High Schools
Bennett, Randy Elliot; And Others – 1988
This study developed, applied, and evaluated a theory-based method of detecting the underlying causes of differential difficulty. The method was applied to two subgroups taking the Scholastic Aptitude Test-Mathematics (SAT-M), 261 visually impaired students taking Braille forms of the test and 1,985 black students at 3 test administrations. It…
Descriptors: Black Students, Braille, Cluster Analysis, Difficulty Level
Bennett, Randy Elliot; And Others – 1991
This study investigated the convergent validity of expert-system scores for four mathematical constructed-response item formats. A five-factor model was proposed comprised of four constructed-response format factors and a Graduate Record Examinations (GRE) General Test quantitative factor. Subjects were drawn from examinees taking a single form of…
Descriptors: College Students, Constructed Response, Correlation, Expert Systems
Peer reviewed
Bennett, Randy Elliot; And Others – Applied Psychological Measurement, 1991
Convergent validity of expert-systems scores for 4 complex constructed-response mathematical formats was assessed for 249 examinees from the Graduate Record Examinations (GRE) General Test in June 1989. The hypothesized five-factor model fit the data well, but an alternative with two dimensions (GRE-quantitative and constructed-response)…
Descriptors: College Entrance Examinations, Constructed Response, Educational Assessment, Expert Systems
Bennett, Randy Elliot; And Others – 1995
Two computer-based categorization tasks were developed and pilot tested. In study 1, the task asked examinees to sort mathematical word problem stems according to prototypes. Results with 9 faculty members and 107 undergraduates showed that those who sorted well tended to have higher Graduate Record Examination General Test scores and college…
Descriptors: Admission (School), Classification, College Entrance Examinations, College Faculty
Bennett, Randy Elliot; Whittington, Beverly R. – 1986
The increasing use of microcomputers and hand-held calculators has implications for mathematics and science instruction, achievement testing, and educational research. The potential effects of these technologies on curricula involve both content and delivery. In mathematics instruction, the focus may shift from manipulative to higher order skills.…
Descriptors: Achievement Tests, Calculators, College Entrance Examinations, College Mathematics