Showing all 3 results
Peer reviewed
Lee, Sunbok; Choi, Youn-Jeng; Cohen, Allan S. – International Journal of Assessment Tools in Education, 2018
A simulation study is a useful tool in examining how validly item response theory (IRT) models can be applied in various settings. Typically, a large number of replications are required to obtain the desired precision. However, many standard software packages in IRT, such as MULTILOG and BILOG, are not well suited for a simulation study requiring…
Descriptors: Item Response Theory, Simulation, Replication (Evaluation), Automation
Peer reviewed
Direct link
Williamson, David M.; Xi, Xiaoming; Breyer, F. Jay – Educational Measurement: Issues and Practice, 2012
A framework for evaluation and use of automated scoring of constructed-response tasks is provided that entails both evaluation of automated scoring as well as guidelines for implementation and maintenance in the context of constantly evolving technologies. Consideration of validity issues and challenges associated with automated scoring are…
Descriptors: Automation, Scoring, Evaluation, Guidelines
Peer reviewed
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation