Educational Testing Service, 2011
Choosing whether to test via computer is the most difficult and consequential decision the designers of a testing program can make. The decision is difficult because of the wide range of choices available. Designers can choose where and how often the test is made available, how the test items look and function, how those items are combined into…
Descriptors: Test Items, Testing Programs, Testing, Computer Assisted Testing
Peer reviewed
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes, large-scale English language testing program. We examined the effectiveness of generic scoring and two variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
Peer reviewed
Islam, A. K. M. Najmul – Journal of Information Systems Education, 2011
This paper examines factors that influence the post-adoption satisfaction of educators with e-learning systems. Based on the expectation-confirmation framework, we propose a research model that demonstrates how post-adoption beliefs affect post-adoption satisfaction. The model was tested at a university by educators (n = 175) who use an e-learning…
Descriptors: Electronic Learning, Testing Programs, Participant Satisfaction, Teacher Attitudes