Publication Date
    In 2025: 0
    Since 2024: 1
    Since 2021 (last 5 years): 3
    Since 2016 (last 10 years): 4
    Since 2006 (last 20 years): 5
Descriptor
    Automation: 5
    Natural Language Processing: 5
    Grade 4: 4
    Elementary School Students: 3
    Grade 5: 3
    Scoring: 3
    Accuracy: 2
    Artificial Intelligence: 2
    Computer Assisted Testing: 2
    Essays: 2
    Evaluation Methods: 2
Source
    Journal of Educational…: 2
    American Educational Research…: 1
    Language Assessment Quarterly: 1
    National Center for Research…: 1
Author
    Wilson, Joshua: 2
    Araya, Roberto: 1
    Chen, Dandan: 1
    Hannah, L.: 1
    Hebert, Michael: 1
    Iseli, Markus R.: 1
    Jang, E. E.: 1
    Kerr, Deirdre: 1
    Mousavi, Hamid: 1
    Roscoe, Rod D.: 1
    Shah, M.: 1
Publication Type
    Reports - Research: 5
    Journal Articles: 4
Education Level
    Elementary Education: 5
    Intermediate Grades: 5
    Grade 4: 4
    Middle Schools: 4
    Grade 5: 3
    Early Childhood Education: 2
    Grade 3: 2
    Grade 6: 2
    Primary Education: 2
    Junior High Schools: 1
    Secondary Education: 1
Urrutia, Felipe; Araya, Roberto – Journal of Educational Computing Research, 2024
Written answers to open-ended questions can have a greater long-term effect on learning than multiple-choice questions. However, it is critical that teachers review the answers immediately and ask students to redo those that are incoherent. This is a difficult and time-consuming task for teachers. A possible solution is to automate the detection…
Descriptors: Elementary School Students, Grade 4, Elementary School Mathematics, Mathematics Tests
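The truncated abstract names the goal, automated detection of incoherent answers, but not the method. As a purely illustrative sketch (not the authors' model), the task can be framed as binary text classification; the examples, labels, and scikit-learn pipeline below are all hypothetical.

```python
# Hedged sketch: incoherent-answer detection as binary text classification.
# Training examples, labels, and the pipeline are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled answers: 1 = coherent, 0 = incoherent.
answers = [
    "I added the two fractions after finding a common denominator.",
    "because yes the number it",
    "The area is length times width, so 4 times 5 is 20.",
    "dog pizza seven runs",
]
labels = [1, 0, 1, 0]

# Character n-grams are fairly robust to young writers' spelling.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(answers, labels)

# Score new answers; a low coherence probability flags a redo request.
new_answers = ["I counted by fives to get 25.", "green the because"]
for text, p in zip(new_answers, model.predict_proba(new_answers)[:, 1]):
    print(f"p(coherent) = {p:.2f} -> {text!r}")
```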
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
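The abstract names multivariate generalizability theory as the reliability framework. As a worked illustration only (univariate, not the authors' multivariate analysis), the sketch below estimates variance components and a relative G coefficient for a fully crossed students-by-raters design; the score matrix is invented.

```python
import numpy as np

# Hypothetical scores: rows = students (p), columns = raters (r).
scores = np.array([
    [4.0, 3.5, 4.0],
    [2.0, 2.5, 2.0],
    [3.0, 3.0, 3.5],
    [5.0, 4.5, 5.0],
    [1.5, 2.0, 1.5],
])
n_p, n_r = scores.shape

grand = scores.mean()
person_means = scores.mean(axis=1)
rater_means = scores.mean(axis=0)

# Sums of squares for a fully crossed p x r design.
ss_p = n_r * ((person_means - grand) ** 2).sum()
ss_r = n_p * ((rater_means - grand) ** 2).sum()
ss_pr = ((scores - grand) ** 2).sum() - ss_p - ss_r  # interaction + error

ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

# Variance components from the expected mean squares.
var_pr = ms_pr
var_r = max((ms_r - ms_pr) / n_p, 0.0)
var_p = max((ms_p - ms_pr) / n_r, 0.0)

# Relative G coefficient for the mean score over n_r raters.
g = var_p / (var_p + var_pr / n_r)
print(f"person={var_p:.3f} rater={var_r:.3f} residual={var_pr:.3f}")
print(f"G coefficient ({n_r} raters): {g:.3f}")
```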
Hannah, L.; Jang, E. E.; Shah, M.; Gupta, V. – Language Assessment Quarterly, 2023
Machines have a long-demonstrated ability to find statistical relationships between qualities of texts and surface-level linguistic indicators of writing. More recently, artificial intelligence has unlocked the potential of using machines to identify content-related writing trait criteria. This development is significant,…
Descriptors: Validity, Automation, Scoring, Writing Assignments
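The abstract's claim, that machines can move beyond surface indicators toward content-related trait criteria, can be made concrete with a toy example. The sketch below scores a response against rubric criterion descriptions by lexical similarity; the criteria, response, and threshold are hypothetical, and real content-trait scoring would use far richer AI models than TF-IDF.

```python
# Hedged sketch: checking content-related trait criteria by comparing a
# response to rubric descriptions. Criteria and threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

criteria = {
    "claim": "states a clear claim or main idea about the topic",
    "evidence": "supports the claim with specific evidence or examples",
    "organization": "orders ideas with an introduction, body, and conclusion",
}

response = ("Recess should be longer because studies show exercise "
            "helps students focus, and my class is calmer after recess.")

# Build a shared vocabulary from the criteria, then project the response.
vec = TfidfVectorizer(stop_words="english")
crit_vecs = vec.fit_transform(list(criteria.values()))
resp_vec = vec.transform([response])
sims = cosine_similarity(resp_vec, crit_vecs).ravel()

for name, s in zip(criteria, sims):
    print(f"{name}: similarity={s:.2f} -> {'present' if s > 0.1 else 'weak'}")
```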
Wilson, Joshua; Roscoe, Rod D. – Journal of Educational Computing Research, 2020
The present study extended research on the effectiveness of automated writing evaluation (AWE) systems. Sixth graders were randomly assigned by classroom to an AWE condition that used "Project Essay Grade Writing" (n = 56) or a word-processing condition that used Google Docs (n = 58). Effectiveness was evaluated using multiple metrics:…
Descriptors: Automation, Writing Evaluation, Feedback (Response), Instructional Effectiveness
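The abstract notes that effectiveness was evaluated using multiple metrics across randomized conditions. One standard metric for a two-condition comparison is a standardized mean difference; the sketch below computes Cohen's d on invented score vectors sized to match the reported groups (n = 56 and n = 58). It is not the study's analysis, which would also have to respect the classroom-level randomization.

```python
# Hedged sketch: Cohen's d for two conditions. The score vectors are
# randomly generated stand-ins, not the study's data.
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1)
                  + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
awe = rng.normal(3.4, 0.8, 56)    # hypothetical AWE condition (n = 56)
docs = rng.normal(3.1, 0.8, 58)   # hypothetical word-processing condition (n = 58)
print(f"Cohen's d = {cohens_d(awe, docs):.2f}")
```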
Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R. – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2013
The Common Core assessments emphasize short-essay constructed-response items over multiple-choice items because they are more precise measures of understanding. However, such items are too costly and time-consuming to be used in national assessments unless a way is found to score them automatically. Current automatic essay scoring techniques are…
Descriptors: Automation, Scoring, Essay Tests, Natural Language Processing
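The abstract breaks off while characterizing current automatic essay scoring techniques; the classic baseline such critiques allude to regresses human scores on shallow surface statistics rather than on meaning. A minimal sketch of that baseline follows, with hypothetical features, essays, and scores.

```python
# Hedged sketch: the surface-feature regression baseline for essay scoring.
# Features, essays, and rubric scores below are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

def surface_features(text: str) -> list[float]:
    """Shallow statistics only: no model of the essay's content."""
    words = text.split()
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    type_token = len({w.lower() for w in words}) / max(n_words, 1)
    n_sents = max(text.count(".") + text.count("!") + text.count("?"), 1)
    return [n_words, avg_word_len, type_token, n_words / n_sents]

essays = [
    "The water cycle moves water from oceans to clouds and back as rain.",
    "Plants need sun. They grow.",
    "Photosynthesis lets plants turn sunlight, water, and air into food.",
]
human_scores = [4.0, 2.0, 5.0]  # hypothetical rubric scores

X = np.array([surface_features(e) for e in essays])
model = LinearRegression().fit(X, human_scores)
new = "Rain falls. Rivers carry the water back to the sea."
print(model.predict(np.array([surface_features(new)])))
```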