Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 11 |
Publication Type
| Publication type | Records |
| --- | --- |
| Reports - Research | 10 |
| Speeches/Meeting Papers | 7 |
| Journal Articles | 3 |
| Reports - Descriptive | 1 |
Education Level
| Education level | Records |
| --- | --- |
| High Schools | 5 |
| Secondary Education | 5 |
| Higher Education | 2 |
| Postsecondary Education | 1 |
Location
| Location | Records |
| --- | --- |
| Arizona (Phoenix) | 2 |
| Mississippi | 1 |
Assessments and Surveys
| Assessment or survey | Records |
| --- | --- |
| Gates-MacGinitie Reading Tests | 3 |
| Writing Apprehension Test | 1 |
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate multi-faceted human evaluation criteria. Summary scoring adds a further layer of complexity because it requires comparing two texts of differing lengths. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Theories of discourse argue that comprehension depends on the coherence of the learner's mental representation. Our aim is to create a reliable automated representation to estimate readers' level of comprehension based on different productions, namely self-explanations and answers to open-ended questions. Previous work relied on Cohesion Network…
Descriptors: Network Analysis, Reading Comprehension, Automation, Artificial Intelligence
Botarleanu, Robert-Mihai; Dascalu, Mihai; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2020
A key writing skill is the capability to clearly convey desired meaning using available linguistic knowledge. Consequently, writers must select from a large array of idioms, vocabulary terms that are semantically equivalent, and discourse features that simultaneously reflect content and allow readers to grasp meaning. In many cases, a simplified…
Descriptors: Natural Language Processing, Writing Skills, Difficulty Level, Reading Comprehension
Allen, Laura K.; Perret, Cecile; McNamara, Danielle S. – Grantee Submission, 2016
The relationship between working memory capacity and writing ability was examined via a linguistic analysis of student essays. Undergraduate students (n = 108) wrote timed, prompt-based essays and completed a battery of cognitive assessments. The surface- and discourse-level linguistic features of students' essays were then analyzed using natural…
Descriptors: Cognitive Processes, Writing (Composition), Short Term Memory, Writing Ability
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of argumentative writing generally includes analyses of the specific linguistic and rhetorical features contained in the individual essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing may more accurately capture their…
Descriptors: Writing (Composition), Persuasive Discourse, Essays, Language Usage
Crossley, Scott; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates a new approach to automatically assessing essay quality that combines traditional approaches based on assessing textual features with new approaches that measure student attributes such as demographic information, standardized test scores, and survey results. The results demonstrate that combining both text features and…
Descriptors: Automation, Scoring, Essays, Evaluation Methods
Snow, Erica L.; Allen, Laura K.; Jacovina, Matthew E.; Crossley, Scott A.; Perret, Cecile A.; McNamara, Danielle S. – Journal of Learning Analytics, 2015
Writing researchers have suggested that students who are perceived as strong writers (i.e., those who generate texts rated as high quality) demonstrate flexibility in their writing style. While anecdotally this has been a commonly held belief among researchers and educators, there is little empirical research to support this claim. This study…
Descriptors: Writing (Composition), Writing Strategies, Hypothesis Testing, Essays
Snow, Erica L.; Allen, Laura K.; Jacovina, Matthew E.; Crossley, Scott A.; Perret, Cecile A.; McNamara, Danielle S. – Grantee Submission, 2015
Writing researchers have suggested that students who are perceived as strong writers (i.e., those who generate texts rated as high quality) demonstrate flexibility in their writing style. While anecdotally this has been a commonly held belief among researchers and educators, there is little empirical research to support this claim. This study…
Descriptors: Writing (Composition), Writing Strategies, Hypothesis Testing, Essays
McNamara, Danielle S.; Crossley, Scott A.; Roscoe, Rod – Grantee Submission, 2013
The Writing Pal is an intelligent tutoring system that provides writing strategy training. A large part of its artificial intelligence resides in the natural language processing algorithms that assess essay quality and guide feedback to students. Because writing is often highly nuanced and subjective, the development of these algorithms must…
Descriptors: Intelligent Tutoring Systems, Natural Language Processing, Writing Instruction, Feedback (Response)
Crossley, Scott A.; Varner, Laura K.; Roscoe, Rod D.; McNamara, Danielle S. – Grantee Submission, 2013
We present an evaluation of the Writing Pal (W-Pal) intelligent tutoring system (ITS) and the W-Pal automated writing evaluation (AWE) system through the use of computational indices related to text cohesion. Sixty-four students participated in this study. Each student was assigned to either the W-Pal ITS condition or the W-Pal AWE condition. The…
Descriptors: Intelligent Tutoring Systems, Automation, Writing Evaluation, Writing Assignments

