Showing all 3 results
Peer reviewed
Direct link
Raczynski, Kevin; Cohen, Allan – Applied Measurement in Education, 2018
The literature on Automated Essay Scoring (AES) systems has provided useful validation frameworks for any assessment that includes AES scoring. Furthermore, evidence for the scoring fidelity of AES systems is accumulating. Yet questions remain when appraising the scoring performance of AES systems. These questions include: (a) which essays are…
Descriptors: Essay Tests, Test Scoring Machines, Test Validity, Evaluators
Peer reviewed
Direct link
Bardhoshi, Gerta; Erford, Bradley T. – Measurement and Evaluation in Counseling and Development, 2017
Precision is a key facet of test development, with score reliability determined primarily according to the types of error one wants to approximate and demonstrate. This article identifies and discusses several primary forms of reliability estimation: internal consistency (i.e., split-half, KR-20, α), test-retest, alternate forms, interscorer, and…
Descriptors: Scores, Test Reliability, Accuracy, Pretests Posttests
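The abstract above names several reliability estimates without showing how they are computed. As a rough illustration only (not drawn from the article), the sketch below computes two of the internal-consistency indices it mentions, coefficient alpha and an odd-even split-half estimate, on a small made-up item-score matrix using NumPy.

# Minimal sketch, assuming a hypothetical examinee-by-item score matrix;
# not taken from Bardhoshi & Erford (2017).
import numpy as np

# rows = examinees, columns = items (made-up dichotomous scores)
scores = np.array([
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
], dtype=float)

def cronbach_alpha(x):
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def split_half(x):
    """Odd-even split-half correlation, stepped up with the Spearman-Brown formula."""
    odd = x[:, ::2].sum(axis=1)
    even = x[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

print(f"alpha      = {cronbach_alpha(scores):.3f}")
print(f"split-half = {split_half(scores):.3f}")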
Peer reviewed
Download full text (PDF on ERIC)
Merrel, Jeremy D.; Cirillo, Pier F.; Schwartz, Pauline M.; Webb, Jeffrey A. – Higher Education Studies, 2015
Multiple choice testing is a common but often ineffective method for evaluating learning. A newer approach, however, using Immediate Feedback Assessment Technique (IF-AT®, Epstein Educational Enterprise, Inc.) forms, offers several advantages. In particular, a student learns immediately if his or her answer is correct and, in the case of an…
Descriptors: Multiple Choice Tests, Feedback (Response), Evaluation Methods, Guessing (Tests)