Publication Date
In 2025: 1
Since 2024: 2
Since 2021 (last 5 years): 10
Since 2016 (last 10 years): 13
Since 2006 (last 20 years): 15
Source
Grantee Submission: 15
Author
Litman, Diane: 3
Zhang, Haoran: 3
Correnti, Richard: 2
Crossley, Scott A.: 2
Matsumura, Lindsay Clare: 2
McNamara, Danielle S.: 2
Rod D. Roscoe: 2
Roscoe, Rod D.: 2
Wilson, Joshua: 2
Aaron D. Likens: 1
Alexandria Raiche: 1
Publication Type
Reports - Research: 14
Journal Articles: 4
Speeches/Meeting Papers: 4
Tests/Questionnaires: 3
Dissertations/Theses -…: 1
Education Level
Elementary Education: 5
High Schools: 4
Higher Education: 3
Postsecondary Education: 3
Secondary Education: 3
Adult Education: 1
Early Childhood Education: 1
Grade 1: 1
Grade 10: 1
Grade 2: 1
Grade 3: 1
Location
Illinois: 1
Kentucky: 1
Louisiana: 1
Massachusetts: 1
Pennsylvania: 1
Assessments and Surveys
Gates MacGinitie Reading Tests: 2
ACT Assessment: 1
SAT (College Admission Test): 1
Zhang, Haoran; Litman, Diane – Grantee Submission, 2021
Human essay grading is a laborious task that consumes considerable time and effort. Automated Essay Scoring (AES) has thus been proposed as a fast and effective solution to the problem of grading student writing at scale. However, because AES typically uses supervised machine learning, a human-graded essay corpus is still required to train the AES…
Descriptors: Essays, Grading, Writing Evaluation, Computational Linguistics
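
The supervised setup this abstract describes can be made concrete with a minimal sketch: train a regressor on human-graded essays, then score new ones. This is an illustrative pipeline, not the authors' system; the toy corpus, the TF-IDF + ridge feature choice, and the 1-4 rubric are all assumptions. Quadratic weighted kappa is the agreement metric conventionally reported for AES.

```python
# Illustrative supervised-AES pipeline (an assumption, not the authors'
# system): TF-IDF features + ridge regression trained on a toy
# human-graded corpus, evaluated with quadratic weighted kappa (QWK).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import cohen_kappa_score
from sklearn.pipeline import make_pipeline

# Hypothetical human-graded corpus: (essay, holistic score on a 1-4 rubric).
train_essays = [
    "The author supports this claim with two pieces of evidence.",
    "It was good because I liked it.",
    "The text states that habitats are shrinking, which supports the claim.",
    "Bad essay.",
]
train_scores = [4, 2, 4, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(train_essays, train_scores)

# Predict, then round and clip into the rubric's integer range.
test_essays = ["The author cites evidence from paragraph three.", "good"]
predicted = [min(4, max(1, round(p))) for p in model.predict(test_essays)]

# Agreement with human raters is conventionally reported as QWK.
human_scores = [4, 1]
print(predicted, cohen_kappa_score(human_scores, predicted, weights="quadratic"))
```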
Andrew Potter; Mitchell Shortt; Maria Goldshtein; Rod D. Roscoe – Grantee Submission, 2025
Broadly defined, academic language (AL) is a set of lexical-grammatical norms and registers commonly used in educational and academic discourse. Mastery of academic language in writing is an important aspect of writing instruction and assessment. The purpose of this study was to use Natural Language Processing (NLP) tools to examine the extent to…
Descriptors: Academic Language, Natural Language Processing, Grammar, Vocabulary Skills
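
As a hedged illustration of how NLP tools can quantify academic language in writing, the sketch below computes two toy AL indicators per essay: academic-vocabulary hits and nominalizations per 100 words. The word list and suffix heuristic are stand-ins, not the instruments used in the study.

```python
# Toy academic-language (AL) profile: rate of academic-vocabulary hits
# and of nominalizations per 100 words. The word list and the suffix
# heuristic are illustrative stand-ins, not the study's NLP tools.
import re

ACADEMIC_WORDS = {"analyze", "concept", "data", "derive", "evidence",
                  "hypothesis", "interpret", "method", "significant"}
NOMINALIZATION_SUFFIXES = ("tion", "sion", "ment", "ness", "ity")

def al_profile(text: str) -> dict:
    tokens = re.findall(r"[a-z]+", text.lower())
    n = len(tokens) or 1
    academic = sum(t in ACADEMIC_WORDS for t in tokens)
    nominal = sum(t.endswith(NOMINALIZATION_SUFFIXES) and len(t) > 6
                  for t in tokens)
    return {"academic_per_100": 100 * academic / n,
            "nominalizations_per_100": 100 * nominal / n}

print(al_profile("We analyze the data to interpret the hypothesis."))
```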
Kathryn S. McCarthy; Rod D. Roscoe; Laura K. Allen; Aaron D. Likens; Danielle S. McNamara – Grantee Submission, 2022
The benefits of writing strategy feedback are well established. This study examined the extent to which adding spelling and grammar checkers supports writing and revision in comparison to providing writing strategy feedback alone. High school students (n = 119) wrote and revised six persuasive essays in Writing Pal, an automated writing evaluation…
Descriptors: High School Students, Automation, Writing Evaluation, Computer Software
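
A toy sketch of the kind of spelling-checker pass an AWE system can layer on top of strategy feedback: flag out-of-dictionary tokens and suggest the nearest known word. Writing Pal's actual checkers are not described here; the dictionary and the matching rule below are assumptions.

```python
# Toy spelling-feedback pass: flag unknown tokens and suggest close
# dictionary matches. The tiny dictionary is a stand-in; real AWE
# checkers are far more sophisticated.
import difflib
import re

DICTIONARY = {"the", "essay", "argues", "that", "school", "uniforms",
              "reduce", "bullying", "students", "persuasive"}

def spelling_feedback(text: str) -> list[tuple[str, list[str]]]:
    flags = []
    for token in re.findall(r"[a-z]+", text.lower()):
        if token not in DICTIONARY:
            # difflib ranks dictionary words by string similarity.
            suggestions = difflib.get_close_matches(token, DICTIONARY, n=2)
            flags.append((token, suggestions))
    return flags

print(spelling_feedback("The essay argus that schol uniforms reduce bulying."))
```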
Litman, Diane; Zhang, Haoran; Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine – Grantee Submission, 2021
Automated Essay Scoring (AES) can reliably grade essays at scale and reduce human effort in both classroom and commercial settings. There are currently three dominant supervised learning paradigms for building AES models: feature-based, neural, and hybrid. While feature-based models are more explainable, neural network models often outperform…
Descriptors: Essays, Writing Evaluation, Models, Accuracy
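
A minimal sketch of the feature-based paradigm this abstract contrasts with neural models: hand-crafted, named features feed a linear model whose coefficients can be read off directly, which is the explainability advantage noted above. The three features are common illustrative choices, not the authors' feature set.

```python
# Feature-based AES in miniature: interpretable features + linear model.
# The features and toy data are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

def features(essay: str) -> list[float]:
    words = essay.split()
    n = len(words) or 1
    return [
        float(n),                                # essay length
        sum(len(w) for w in words) / n,          # mean word length
        len({w.lower() for w in words}) / n,     # type-token ratio
    ]

essays = [
    "Short one.",
    "A considerably longer essay with varied vocabulary choices.",
    "word word word word word",
    "Clear claims supported by textual evidence throughout.",
]
scores = [1, 4, 1, 4]

X = np.array([features(e) for e in essays])
model = LinearRegression().fit(X, scores)

# Explainability: each coefficient ties a score change to a named
# feature, something a neural model does not expose as directly.
for name, coef in zip(["length", "mean_word_len", "ttr"], model.coef_):
    print(f"{name}: {coef:+.3f}")
```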
Crossley, Scott; Wan, Qian; Allen, Laura; McNamara, Danielle – Grantee Submission, 2021
Synthesis writing is widely taught across domains and serves as an important means of assessing writing ability, text comprehension, and content learning. Synthesis writing differs from other types of writing in terms of both cognitive and task demands because it requires writers to integrate information across source materials. However, little is…
Descriptors: Writing Skills, Cognitive Processes, Essays, Cues
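
One hedged way to operationalize "integrating information across source materials" is to measure an essay's lexical overlap with each source, e.g., TF-IDF cosine similarity: an essay drawing on both sources should show nontrivial similarity to each rather than overlapping with only one. This is an illustrative measure, not the study's instrumentation.

```python
# Illustrative source-integration measure for synthesis writing:
# cosine similarity between the essay and each source over shared
# TF-IDF vectors. Texts below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sources = [
    "Reforestation increases carbon capture and restores habitats.",
    "Urban tree cover lowers summer temperatures in cities.",
]
essay = "Planting trees both captures carbon and cools cities in summer."

vec = TfidfVectorizer().fit(sources + [essay])
sims = cosine_similarity(vec.transform([essay]), vec.transform(sources))[0]

for i, s in enumerate(sims, 1):
    print(f"source {i}: {s:.2f}")
```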
Lynette Hazelton; Jessica Nastal; Norbert Elliot; Jill Burstein; Daniel F. McCaffrey – Grantee Submission, 2021
In writing studies research, automated writing evaluation technology is typically examined for a specific, often narrow purpose: to evaluate a particular writing improvement measure, to mine data for changes in writing performance, or to demonstrate the effectiveness of a single technology and accompanying validity arguments. This article adopts a…
Descriptors: Formative Evaluation, Writing Evaluation, Automation, Natural Language Processing
Wilson, Joshua; Huang, Yue; Palermo, Corey; Beard, Gaysha; MacArthur, Charles A. – Grantee Submission, 2021
This study examined a naturalistic, districtwide implementation of an automated writing evaluation (AWE) software program called "MI Write" in elementary schools. We specifically examined the degree to which aspects of MI Write were implemented, teacher and student attitudes towards MI Write, and whether MI Write usage along with other…
Descriptors: Automation, Writing Evaluation, Feedback (Response), Computer Software
Amy Adair – Grantee Submission, 2024
Developing models, using mathematics, and constructing explanations are three practices essential for science inquiry learning according to education reform efforts, such as the Next Generation Science Standards (NGSS Lead States, 2013). However, students struggle with these intersecting practices, especially when developing and interpreting…
Descriptors: Artificial Intelligence, Computer Software, Technology Integration, Scaffolding (Teaching Technique)
Alissa Patricia Wolters; Young-suk Grace Kim – Grantee Submission, 2023
We investigated spelling errors in English and Spanish essays by Spanish-English dual language learners in Grades 1, 2, and 3 (N = 278; 51% female) enrolled in either English immersion or English-Spanish dual immersion programs. We examined what types of spelling errors students made, whether they made spelling errors that could be due to…
Descriptors: Spelling, Spanish, English (Second Language), Second Language Learning
Wang, Elaine Lin; Matsumura, Lindsay Clare; Correnti, Richard; Litman, Diane; Zhang, Haoran; Howe, Emily; Magooda, Ahmed; Quintana, Rafael – Grantee Submission, 2020
We investigate students' implementation of the feedback messages they received in an automated writing evaluation system ("eRevise") that aims to improve students' use of text evidence in their writing. Seven 5th- and 6th-grade teachers implemented "eRevise" (n = 143 students). Qualitative analysis of students' essays across…
Descriptors: Feedback (Response), Writing Evaluation, Computer Software, Grade 5
Joshua Wilson; Cristina Ahrendt; Emily A. Fudge; Alexandria Raiche; Gaysha Beard; Charles A. MacArthur – Grantee Submission, 2021
The present study used a focus group methodology to qualitatively explore elementary writing teachers' attitudes and experiences using an automated writing evaluation (AWE) system called MI Write as part of a districtwide implementation of MI Write in Grades 3-5 in 14 elementary schools. We used activity theory as a theoretical framework to answer…
Descriptors: Elementary School Teachers, Teacher Attitudes, Writing Evaluation, Writing Instruction
Roscoe, Rod D.; Wilson, Joshua; Johnson, Adam C.; Mayra, Christopher R. – Grantee Submission, 2017
Automated writing evaluation (AWE) is a popular form of educational technology designed to supplement writing instruction and feedback, yet research on the effectiveness of AWE has yielded mixed findings. The current study considered how students' perceptions of automated essay scoring and feedback influenced their writing performance, revising…
Descriptors: Student Attitudes, Writing Instruction, Writing Evaluation, Feedback (Response)
Crossley, Scott A.; Kyle, Kristopher; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates the relative efficacy of using linguistic micro-features, the aggregation of such features, and a combination of micro-features and aggregated features in developing automatic essay scoring (AES) models. Although the use of aggregated features is widespread in AES systems (e.g., e-rater, IntelliMetric), very little…
Descriptors: Essays, Scoring, Feedback (Response), Writing Evaluation
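
The micro- versus aggregated-feature contrast can be shown in a few lines: individual lexical indices either enter the model directly, or are standardized and averaged into one composite in the style of aggregated systems. The indices and values below are hypothetical.

```python
# Micro-features vs. an aggregated composite. Rows = essays; columns =
# micro-features (e.g., word frequency, concreteness, lexical
# diversity) -- hypothetical values for illustration.
import numpy as np

micro = np.array([
    [3.1, 0.42, 0.61],
    [2.4, 0.55, 0.72],
    [3.8, 0.31, 0.48],
])

# Aggregation: z-score each micro-feature, then average per essay.
z = (micro - micro.mean(axis=0)) / micro.std(axis=0)
aggregate = z.mean(axis=1)

print("micro-feature matrix shape:", micro.shape)       # one column per index
print("aggregated composite:", np.round(aggregate, 2))  # one value per essay
```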
Burstein, Jill; McCaffrey, Dan; Beigman Klebanov, Beata; Ling, Guangming – Grantee Submission, 2017
No significant body of research examines writing achievement and the specific skills and knowledge in the writing domain for postsecondary (college) students in the U.S., even though many at-risk students lack the prerequisite writing skills required to persist in their education. This paper addresses this gap through a novel…
Descriptors: Computer Software, Writing Evaluation, Writing Achievement, College Students
Roscoe, Rod D.; Crossley, Scott A.; Snow, Erica L.; Varner, Laura K.; McNamara, Danielle S. – Grantee Submission, 2014
Automated essay scoring tools are often criticized on the basis of construct validity. Specifically, it has been argued that computational scoring algorithms may be misaligned with higher-level indicators of quality writing, such as writers' demonstrated knowledge and understanding of the essay topics. In this paper, we consider how and whether the…
Descriptors: Correlation, Essays, Scoring, Writing Evaluation
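
In its simplest form, the construct-validity check this abstract points toward reduces to correlating automated scores with human ratings of demonstrated topic knowledge. A minimal sketch with hypothetical vectors:

```python
# Correlate automated scores with human knowledge ratings; the data
# below are hypothetical, and Pearson r is just one possible choice.
import numpy as np

auto_scores = np.array([2, 3, 4, 3, 5, 4, 2, 5])
knowledge_ratings = np.array([1, 3, 3, 4, 5, 4, 2, 4])

# np.corrcoef returns the Pearson correlation matrix.
r = np.corrcoef(auto_scores, knowledge_ratings)[0, 1]
print(f"Pearson r = {r:.2f}")  # alignment with a higher-level quality indicator
```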