Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 3 |
Since 2016 (last 10 years) | 10 |
Since 2006 (last 20 years) | 13 |
Publication Type
Speeches/Meeting Papers | 18 |
Reports - Research | 14 |
Reports - Descriptive | 2 |
Journal Articles | 1 |
Opinion Papers | 1 |
Reports - Evaluative | 1 |
Education Level
Higher Education | 7 |
Postsecondary Education | 6 |
Secondary Education | 4 |
High Schools | 3 |
Grade 10 | 1 |
Audience
Practitioners | 1 |
Teachers | 1 |
Location
Arizona (Phoenix) | 1 |
Assessments and Surveys
Writing Apprehension Test | 1 |

Yang Zhong; Mohamed Elaraby; Diane Litman; Ahmed Ashraf Butt; Muhsin Menekse – Grantee Submission, 2024
This paper introduces REFLECTSUMM, a novel summarization dataset specifically designed for summarizing students' reflective writing. The goal of REFLECTSUMM is to facilitate developing and evaluating novel summarization techniques tailored to real-world scenarios with little training data, with potential implications in the opinion summarization…
Descriptors: Documentation, Writing (Composition), Reflection, Metadata
David W. Brown; Dean Jensen – International Society for Technology, Education, and Science, 2023
The growth of Artificial Intelligence (AI) chatbots has created a great deal of discussion in the education community. While many have gravitated towards the ability of these bots to make learning more interactive, others have grave concerns that student-created essays, long used as a means of assessing the subject comprehension of students, may…
Descriptors: Artificial Intelligence, Natural Language Processing, Computer Software, Writing (Composition)
Wan, Qian; Crossley, Scott; Allen, Laura; McNamara, Danielle – Grantee Submission, 2020
In this paper, we extracted content-based and structure-based features of text to predict human annotations for claims and nonclaims in argumentative essays. We compared Logistic Regression, Bernoulli Naive Bayes, Gaussian Naive Bayes, Linear Support Vector Classification, Random Forest, and Neural Networks to train classification models. Random…
Descriptors: Persuasive Discourse, Essays, Writing Evaluation, Natural Language Processing
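The classifier comparison described in this entry can be illustrated with a minimal scikit-learn sketch. The toy sentences, labels, and bag-of-words features below are invented for illustration; the study's content-based and structure-based features are not reproduced here.

```python
# Minimal sketch of a claim vs. non-claim sentence classifier comparison.
# Toy data and bag-of-words features are stand-ins, not the paper's features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

sentences = [
    "Schools should require uniforms for all students.",   # claim (hypothetical)
    "Uniforms reduce distractions in the classroom.",      # claim (hypothetical)
    "Every student deserves an equal chance to learn.",    # claim (hypothetical)
    "The survey was given to 120 tenth-grade students.",   # non-claim (hypothetical)
    "Participants wrote for twenty-five minutes.",         # non-claim (hypothetical)
    "The essays were collected at the end of class.",      # non-claim (hypothetical)
]
labels = [1, 1, 1, 0, 0, 0]

X = CountVectorizer().fit_transform(sentences).toarray()  # dense array for GaussianNB

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "BernoulliNB": BernoulliNB(),
    "GaussianNB": GaussianNB(),
    "LinearSVC": LinearSVC(),
    "RandomForest": RandomForestClassifier(random_state=0),
    "NeuralNetwork": MLPClassifier(max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X, labels)
    print(f"{name}: training accuracy = {model.score(X, labels):.2f}")
```

With a real corpus, the fit/score step would be replaced by cross-validation on held-out essays.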
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
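As a rough illustration of reference-based summary scoring (one common ingredient of automated summarization-quality models, not necessarily the model developed in this work), a unigram-recall measure in the spirit of ROUGE-1 can be computed as follows; the texts are invented.

```python
# Sketch: unigram recall of a student summary against a reference summary.
reference = "the experiment shows that sleep improves memory in students"
student_summary = "students who sleep more remember the material better"

ref_tokens = reference.split()
student_tokens = set(student_summary.split())

overlap = sum(1 for tok in ref_tokens if tok in student_tokens)
print(f"unigram recall vs. reference: {overlap / len(ref_tokens):.2f}")
```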
Öncel, Püren; Flynn, Lauren E.; Sonia, Allison N.; Barker, Kennis E.; Lindsay, Grace C.; McClure, Caleb M.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2021
Automated Writing Evaluation systems have been developed to help students improve their writing skills through the automated delivery of both summative and formative feedback. These systems have demonstrated strong potential in a variety of educational contexts; however, they remain limited in their personalization and scope. The purpose of the…
Descriptors: Computer Assisted Instruction, Writing Evaluation, Formative Evaluation, Summative Evaluation
Burstein, Jill; McCaffrey, Daniel; Beigman Klebanov, Beata; Ling, Guangming; Holtzman, Steven – Grantee Submission, 2019
Writing is a challenge and a potential obstacle for students in U.S. 4-year postsecondary institutions who lack prerequisite writing skills. This study aims to address the research question: Is there a relationship between specific features (analytics) in coursework writing and broader success predictors? Knowledge about this relationship could…
Descriptors: Undergraduate Students, Writing (Composition), Writing Evaluation, Learning Analytics
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2017
The current study examined the degree to which the quality and characteristics of students' essays could be modeled through dynamic natural language processing analyses. Undergraduate students (n = 131) wrote timed, persuasive essays in response to an argumentative writing prompt. Recurrent patterns of the words in the essays were then analyzed…
Descriptors: Writing Evaluation, Essays, Persuasive Discourse, Natural Language Processing
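One plausible reading of "recurrent patterns of the words" is a recurrence matrix over an essay's token sequence. The sketch below illustrates that general idea on an invented passage; it is not the study's actual analysis pipeline.

```python
# Sketch: a binary word-recurrence matrix for an essay, one simple way
# to expose repeated lexical patterns over the course of the text.
import numpy as np

essay = ("Technology changes how students write. Students who write often "
         "learn to revise. Revision changes how students think about writing.")
tokens = [t.strip(".,").lower() for t in essay.split()]

n = len(tokens)
recurrence = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        recurrence[i, j] = int(tokens[i] == tokens[j])

# Recurrence rate off the main diagonal: proportion of repeated-word pairs.
rate = (recurrence.sum() - n) / (n * n - n)
print(f"tokens: {n}, recurrence rate: {rate:.3f}")
```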
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of writing proficiency generally includes analyses of the specific linguistic and rhetorical features contained in the singular essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing might more closely capture writing skill.…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Skills
Allen, Laura K.; Jacovina, Matthew E.; Dascalu, Mihai; Roscoe, Rod D.; Kent, Kevin M.; Likens, Aaron D.; McNamara, Danielle S. – International Educational Data Mining Society, 2016
This study investigates how and whether information about students' writing can be recovered from basic behavioral data extracted during their sessions in an intelligent tutoring system for writing. We calculate basic and time-sensitive keystroke indices based on log files of keys pressed during students' writing sessions. A corpus of prompt-based…
Descriptors: Writing Processes, Intelligent Tutoring Systems, Natural Language Processing, Feedback (Response)
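The basic and time-sensitive keystroke indices mentioned here can be approximated from a log of (timestamp, key) events. The sketch below uses an invented log and pause threshold; the tutoring system's actual log format and index definitions may differ.

```python
# Sketch: simple keystroke indices from a hypothetical key log of
# (timestamp_ms, key) events recorded during a writing session.
log = [(0, "T"), (180, "h"), (320, "e"), (410, " "), (2600, "e"),
       (2750, "s"), (2890, "s"), (3010, "a"), (3150, "y")]

timestamps = [t for t, _ in log]
intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]

total_keys = len(log)
mean_iki = sum(intervals) / len(intervals)            # mean inter-key interval (ms)
long_pauses = sum(1 for d in intervals if d > 2000)   # pauses over 2 s (assumed threshold)

print(f"keystrokes: {total_keys}, mean IKI: {mean_iki:.0f} ms, long pauses: {long_pauses}")
```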
Allen, Laura K.; Mills, Caitlin; Jacovina, Matthew E.; Crossley, Scott; D'Mello, Sidney; McNamara, Danielle S. – Grantee Submission, 2016
Writing training systems have been developed to provide students with instruction and deliberate practice on their writing. Although generally successful in providing accurate scores, a common criticism of these systems is their lack of personalization and adaptive instruction. In particular, these systems tend to place the strongest emphasis on…
Descriptors: Learner Engagement, Psychological Patterns, Writing Instruction, Essays
Allen, Laura K.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2015
We investigated linguistic factors that relate to misalignment between students' and teachers' ratings of essay quality. Students (n = 126) wrote essays and rated the quality of their work. Teachers then provided their own ratings of the essays. Results revealed that students who were less accurate in their self-assessments produced essays that…
Descriptors: Essays, Scores, Natural Language Processing, Interrater Reliability
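A simple way to operationalize the misalignment studied here is the signed gap between a student's self-rating and the teacher's rating of the same essay. The rating values below are hypothetical.

```python
# Sketch: misalignment as the signed gap between student self-ratings and
# teacher ratings of the same essays (all values hypothetical).
student = [5, 4, 6, 3, 5]
teacher = [3, 4, 4, 3, 6]

gaps = [s - t for s, t in zip(student, teacher)]
overestimators = sum(1 for g in gaps if g > 0)
mean_abs_gap = sum(abs(g) for g in gaps) / len(gaps)
print(f"mean |gap|: {mean_abs_gap:.2f}, overestimating students: {overestimators}/{len(gaps)}")
```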
Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2014
This study explores correlations between human ratings of essay quality and component scores based on similar natural language processing indices and weighted through a principal component analysis. The results demonstrate that such component scores show small to large effects with human ratings and thus may be suitable for providing both summative…
Descriptors: Essays, Computer Assisted Testing, Writing Evaluation, Scores
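The component-score approach can be sketched as: standardize a matrix of NLP indices, reduce it with PCA, and correlate a component with human ratings. The index values and ratings below are invented, and the study's exact weighting scheme may differ.

```python
# Sketch: PCA component scores from NLP indices, correlated with human ratings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.stats import pearsonr

indices = np.array([   # rows = essays, columns = NLP indices (hypothetical)
    [120, 0.42, 7.1],
    [340, 0.55, 9.8],
    [210, 0.47, 8.0],
    [400, 0.61, 10.5],
    [150, 0.39, 6.5],
])
human_ratings = np.array([2, 4, 3, 5, 2])

components = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(indices))
r, p = pearsonr(components[:, 0], human_ratings)
print(f"component 1 vs. human rating: r = {r:.2f} (p = {p:.3f})")
```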
Crossley, Scott A.; Varner, Laura K.; Roscoe, Rod D.; McNamara, Danielle S. – Grantee Submission, 2013
We present an evaluation of the Writing Pal (W-Pal) intelligent tutoring system (ITS) and the W-Pal automated writing evaluation (AWE) system through the use of computational indices related to text cohesion. Sixty-four students participated in this study. Each student was assigned to either the W-Pal ITS condition or the W-Pal AWE condition. The…
Descriptors: Intelligent Tutoring Systems, Automation, Writing Evaluation, Writing Assignments
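As an illustration of a text-cohesion index (one simple stand-in, not necessarily the indices used in this evaluation), the sketch below computes content-word overlap between adjacent sentences of an invented passage.

```python
# Sketch: content-word overlap between adjacent sentences as a cohesion index.
text = ("Good essays state a clear thesis. The thesis is then supported "
        "with evidence. Evidence works best when it is specific.")
stopwords = {"a", "the", "is", "are", "then", "with", "when", "it", "best"}

sentences = [s.strip() for s in text.split(".") if s.strip()]
bags = [{w.lower() for w in s.split()} - stopwords for s in sentences]

overlaps = []
for prev, curr in zip(bags, bags[1:]):
    overlaps.append(len(prev & curr) / max(len(prev | curr), 1))  # Jaccard overlap

print(f"mean adjacent-sentence overlap: {sum(overlaps) / len(overlaps):.2f}")
```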
King, Mary – 1983
A text's meaning is, in part, independent of its form. Reading, most of the time, is taking meaning--not words--from the printed page, while proofreading requires attention to form rather than meaning. The author notes that: (1) a meaningful passage is easier to read than one with less meaning; (2) errors in oral reading usually do not obscure a…
Descriptors: Language Processing, Reading Comprehension, Revision (Written Composition), Writing Evaluation
Coe, Richard M. – 1984
An assignment given to students at the end of an advanced composition class empowers students by helping them grasp principles and develop abilities that allow them to get beyond needing teachers. The crux of the assignment is a heuristic for analyzing any particular type of writing for the purpose of learning to produce it. The students are…
Descriptors: Content Analysis, Heuristics, Higher Education, Language Processing