Showing all 3 results
Peer reviewed
Direct link
He, Tung-hsien – SAGE Open, 2019
This study employed a mixed-design approach and the Many-Facet Rasch Measurement (MFRM) framework to investigate whether rater bias occurred between the onscreen scoring (OSS) mode and the paper-based scoring (PBS) mode. Nine human raters analytically marked scanned scripts and paper scripts using a six-category (i.e., six-criterion) rating…
Descriptors: Computer Assisted Testing, Scoring, Item Response Theory, Essays
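Note: the abstract names the MFRM framework but not the model itself. Rater-bias analyses of this kind are usually built on the many-facet Rasch model; a minimal sketch of that standard formulation, in conventional notation not taken from the paper:

\log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k

where B_n is the ability of examinee n, D_i the difficulty of rating criterion i, C_j the severity of rater j, and F_k the step difficulty of category k. A scoring-mode facet (OSS vs. PBS) would enter as an additional term, with mode-related rater bias examined through the interaction analysis.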
Peer reviewed
PDF on ERIC
Ramineni, Chaitanya; Williamson, David – ETS Research Report Series, 2018
Notable mean score differences between the "e-rater"® automated scoring engine and human raters were observed for essays from certain demographic groups on the "GRE"® General Test in use before the major revision of 2012, called the rGRE. The use of e-rater as a check-score model with discrepancy thresholds prevented an adverse impact…
Descriptors: Scores, Computer Assisted Testing, Test Scoring Machines, Automation
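Note: the check-score arrangement mentioned in the abstract reduces, in outline, to a simple rule: each essay receives a human score and an e-rater score, and only essays whose discrepancy exceeds a threshold are routed to an additional human rating. A minimal sketch of that logic in Python follows; the threshold value, the averaging rule, and the function name are illustrative assumptions, not the operational ETS procedure.

def check_score(human_score: float, erater_score: float, threshold: float = 1.0):
    """Compare a human essay score with the e-rater check score.

    If the two agree within `threshold`, report their average; otherwise
    keep the human score and flag the essay for a second human rating.
    Threshold and averaging rule are illustrative assumptions only.
    """
    discrepancy = abs(human_score - erater_score)
    if discrepancy <= threshold:
        return (human_score + erater_score) / 2, False  # reported score, no adjudication
    return human_score, True  # provisional score, needs a second human rating

# Example: a 1.5-point gap exceeds the 1.0 threshold, so the essay is flagged.
score, needs_second_rating = check_score(human_score=4.0, erater_score=5.5)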
Peer reviewed
Direct link
Lai, Yi-hsiu – British Journal of Educational Technology, 2010
The purpose of this study was to investigate problems and potentials of new technologies in English writing education. The effectiveness of automated writing evaluation (AWE) ("MY Access") and of peer evaluation (PE) was compared. Twenty-two English as a foreign language (EFL) learners in Taiwan participated in this study. They submitted…
Descriptors: Feedback (Response), Writing Evaluation, Peer Evaluation, Grading