Showing all 5 results
Peer reviewed
Lottridge, Sue; Burkhardt, Amy; Boyer, Michelle – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows…
Descriptors: Computer Assisted Testing, Scoring, Automation, Educational Assessment
Peer reviewed
Solano-Flores, Guillermo; Li, Min – Educational Measurement: Issues and Practice, 2009
We addressed the challenge of scoring cognitive interviews in research involving multiple cultural groups. We interviewed 123 fourth- and fifth-grade students from three cultural groups to probe how they related a mathematics item to their personal lives. Item meaningfulness--the tendency of students to relate the content and/or context of an item…
Descriptors: Generalizability Theory, Scoring, Error of Measurement, Grade 5
Peer reviewed
Raymond, Mark R.; Neustel, Sandra; Anderson, Dan – Educational Measurement: Issues and Practice, 2009
Examinees who take high-stakes assessments are usually given an opportunity to repeat the test if they are unsuccessful on their initial attempt. To prevent examinees from obtaining unfair score increases by memorizing the content of specific test items, testing agencies usually assign a different test form to repeat examinees. The use of multiple…
Descriptors: Test Results, Test Items, Testing, Aptitude Tests
Peer reviewed
Sykes, Robert C.; Ito, Kyoko; Wang, Zhen – Educational Measurement: Issues and Practice, 2008
Student responses to a large number of constructed response items in three Math and three Reading tests were scored on two occasions using three ways of assigning raters: single reader scoring, a different reader for each response (item-specific), and three readers each scoring a rater item block (RIB) containing approximately one-third of a…
Descriptors: Test Items, Mathematics Tests, Reading Tests, Scoring
Peer reviewed
Bauer, Ernest A. – Educational Measurement: Issues and Practice, 1985
Misreadings of pencil answer marks on test answer sheets by optical scanners cause scoring errors. Twenty-six different pencils were tested for readability differences when optically scanned. Complete light and dark marks scanned perfectly for 18 pencils. Totals for all six mark types ranged from 947 to 1726 out of 1800. (BS)
Descriptors: Answer Sheets, Elementary Secondary Education, Error of Measurement, Optical Scanners