Showing all 5 results
Peer reviewed
Urrutia, Felipe; Araya, Roberto – Journal of Educational Computing Research, 2024
Written answers to open-ended questions can have a higher long-term effect on learning than multiple-choice questions. However, it is critical that teachers review the answers immediately and ask students to redo those that are incoherent. This can be a difficult and time-consuming task for teachers. A possible solution is to automate the detection…
Descriptors: Elementary School Students, Grade 4, Elementary School Mathematics, Mathematics Tests
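
The detection task this abstract points to can be framed as supervised text classification. A minimal sketch of that framing in Python, assuming teacher-labeled training answers; the toy data, labels, and the TF-IDF-plus-logistic-regression model are illustrative assumptions, not the authors' pipeline:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical teacher-labeled answers: 1 = coherent, 0 = incoherent.
answers = [
    "The area is length times width, so 3 x 4 = 12.",
    "because yes number",
    "You add the two sides and divide by two.",
    "idk fish twelve",
]
labels = [1, 0, 1, 0]

# Bag-of-ngrams features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(answers, labels)
print(clf.predict(["triangle because blue"]))  # likely flagged as incoherent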
Peer reviewed
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
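
For context on the method named above: in generalizability theory, reliability for a simple persons-by-raters (p x r) scoring design is summarized by a generalizability coefficient built from estimated variance components. A standard single-facet form (which the study's multivariate design generalizes) is

E\rho^2 = \frac{\sigma_p^2}{\sigma_p^2 + \sigma_{pr,e}^2 / n_r}

where \sigma_p^2 is true-score variance across persons, \sigma_{pr,e}^2 is the person-by-rater interaction confounded with residual error, and n_r is the number of raters whose scores are averaged.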
Peer reviewed
L. Hannah; E. E. Jang; M. Shah; V. Gupta – Language Assessment Quarterly, 2023
Machines have a long-demonstrated ability to find statistical relationships between qualities of texts and surface-level linguistic indicators of writing. More recently, artificial intelligence has unlocked the potential for machines to identify content-related writing trait criteria. This development is significant,…
Descriptors: Validity, Automation, Scoring, Writing Assignments
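
The surface-level linguistic indicators mentioned here can be made concrete with a short sketch: a few hand-computed features (length, mean sentence length, lexical diversity) fit to holistic scores by ordinary least squares. The features, essays, and scores below are illustrative assumptions, not the study's model:

import numpy as np

def surface_features(text):
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return [
        len(words),                                               # essay length
        len(words) / max(len(sentences), 1),                      # mean sentence length
        len(set(w.lower() for w in words)) / max(len(words), 1),  # lexical diversity
    ]

essays = [
    "Short essay text here.",
    "A longer essay with more varied vocabulary. It develops one idea across sentences.",
    "An essay of middling length that repeats the same words again and again and again.",
]
scores = [2.0, 4.0, 3.0]
X = np.array([surface_features(e) for e in essays])
# Least squares with an intercept column: weights mapping features to scores.
w, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X], np.array(scores), rcond=None)
print(w)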
Peer reviewed
Wilson, Joshua; Roscoe, Rod D. – Journal of Educational Computing Research, 2020
The present study extended research on the effectiveness of automated writing evaluation (AWE) systems. Sixth graders were randomly assigned by classroom to an AWE condition that used "Project Essay Grade Writing" (n = 56) or a word-processing condition that used Google Docs (n = 58). Effectiveness was evaluated using multiple metrics:…
Descriptors: Automation, Writing Evaluation, Feedback (Response), Instructional Effectiveness
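
A common way to quantify effectiveness in a two-condition design like this one is a standardized mean difference (Cohen's d). A minimal sketch, with made-up scores standing in for the study's actual outcome measures:

import statistics as st

awe = [3.2, 3.8, 4.1, 3.5, 3.9]   # hypothetical posttest scores, AWE condition
docs = [3.0, 3.4, 3.6, 3.1, 3.3]  # hypothetical posttest scores, Google Docs condition

def cohens_d(a, b):
    # Pooled-variance standardized mean difference.
    na, nb = len(a), len(b)
    pooled = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / pooled ** 0.5

print(round(cohens_d(awe, docs), 2))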
Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R. – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2013
The Common Core assessments emphasize short-essay constructed-response items over multiple-choice items because they are more precise measures of understanding. However, such items are too costly and time-consuming to be used in national assessments unless a way is found to score them automatically. Current automatic essay scoring techniques are…
Descriptors: Automation, Scoring, Essay Tests, Natural Language Processing
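
As one illustration of the kind of technique at issue, a naive baseline scores a constructed response by its TF-IDF cosine similarity to a model answer. This is an assumed baseline for illustration only, not the CRESST approach the report describes:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
responses = [
    "Plants turn light into chemical energy stored as glucose.",
    "It is when animals eat food.",
]
vec = TfidfVectorizer().fit([reference] + responses)
ref_vec = vec.transform([reference])
for r in responses:
    sim = float(cosine_similarity(ref_vec, vec.transform([r]))[0, 0])
    print(round(sim, 2), "-", r)  # higher similarity ~ closer to the model answer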