Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
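The sentence-level randomization described in the Myers and Wilson abstract can be sketched in a few lines of Python. The snippet below is an illustrative reconstruction, not the authors' actual script: the `randomize_essay` helper, the tokenizer setup, and the fixed seed are assumptions; only the use of NLTK sentence tokenization and the 30 randomized variants per essay come from the abstract.

```python
# Hedged sketch of sentence-level essay randomization with NLTK:
# split an essay into sentences, shuffle their order, and repeat
# to produce multiple randomized variants per essay.
import random
import nltk

nltk.download("punkt", quiet=True)      # classic Punkt sentence models
nltk.download("punkt_tab", quiet=True)  # needed by newer NLTK releases


def randomize_essay(essay: str, n_variants: int = 30, seed: int = 0) -> list[str]:
    """Return n_variants copies of the essay with sentence order shuffled."""
    sentences = nltk.sent_tokenize(essay)
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        shuffled = sentences[:]  # fresh copy so each shuffle is independent
        rng.shuffle(shuffled)
        variants.append(" ".join(shuffled))
    return variants


essay = "Dogs make loyal pets. They need daily walks. They also guard the house."
for variant in randomize_essay(essay, n_variants=3):
    print(variant)
```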
Gerard, Libby F.; Linn, Marcia – AERA Online Paper Repository, 2016
We investigate how technologies that automatically score student-written essays and assign individualized guidance can support student writing and revision in science. We used the automated scoring tools to assign guidance for student-written essays in an online science unit, and studied how students revised their essays based on the guidance and…
Descriptors: Science Instruction, Technical Writing, Revision (Written Composition), Grade 7
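The Gerard and Linn abstract describes automated scores that trigger individualized revision guidance. A minimal sketch of such a score-to-guidance mapping, in Python for consistency with the example above, follows; the score bands, messages, and `assign_guidance` helper are invented for illustration and do not reproduce the study's scoring engine or guidance texts.

```python
# Illustrative mapping from an automated essay score to a pre-written
# revision hint, as one way score-contingent guidance could be assigned.
GUIDANCE_BY_SCORE = {
    1: "Add a scientific claim about the phenomenon you observed.",
    2: "Support your claim with evidence from the unit's data.",
    3: "Explain how your evidence connects to your claim.",
    4: "Consider an alternative explanation and address it.",
    5: "Strong explanation. Check your wording for precision.",
}


def assign_guidance(automated_score: int) -> str:
    """Map an automated essay score (1-5) to an individualized revision hint."""
    return GUIDANCE_BY_SCORE.get(automated_score, "Score out of range.")


print(assign_guidance(2))  # -> evidence-focused revision hint
```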
Madnani, Nitin; Burstein, Jill; Sabatini, John; O'Reilly, Tenaha – Grantee Submission, 2013
We introduce a cognitive framework for measuring reading comprehension that includes the use of novel summary-writing tasks. We derive NLP features from the holistic rubric used to score the summaries written by students for such tasks and use them to design a preliminary, automated scoring system. Our results show that the automated approach…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Reading Comprehension
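The Madnani et al. abstract derives NLP features from a holistic summary rubric and uses them in a preliminary automated scorer. The sketch below assumes two toy rubric-inspired features (content-word overlap with the source passage and a length ratio) and a plain linear regression fit to human scores; the `summary_features` helper, the feature set, and the example scores are assumptions for illustration, not the paper's system.

```python
# Hedged sketch: derive simple features from a student summary relative to
# its source passage, then fit a regression model to human holistic scores.
from sklearn.linear_model import LinearRegression


def summary_features(summary: str, source: str) -> list[float]:
    """Toy features: word overlap with the source and a length ratio."""
    s_words, src_words = summary.lower().split(), source.lower().split()
    overlap = len(set(s_words) & set(src_words)) / max(len(set(s_words)), 1)
    length_ratio = len(s_words) / max(len(src_words), 1)
    return [overlap, length_ratio]


source = "Plants use sunlight to make food through photosynthesis in their leaves."
summaries = [
    "Plants make food from sunlight.",
    "Photosynthesis happens in leaves and uses sunlight to make food.",
    "Dogs are animals.",
]
human_scores = [3.0, 4.0, 1.0]  # invented holistic scores for illustration

X = [summary_features(s, source) for s in summaries]
model = LinearRegression().fit(X, human_scores)
print(model.predict([summary_features("Plants use sunlight for food.", source)]))
```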