Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 1
Since 2016 (last 10 years) | 9
Since 2006 (last 20 years) | 18
Publication Type
Reports - Research | 16
Speeches/Meeting Papers | 12
Journal Articles | 4
Reports - Descriptive | 2
Books | 1
Education Level
High Schools | 8
Secondary Education | 7
Higher Education | 3
Postsecondary Education | 2
Grade 9 | 1
Location
Arizona (Phoenix) | 2
Mississippi | 1
Assessments and Surveys
Gates MacGinitie Reading Tests | 3
Test of English as a Foreign Language (TOEFL) | 1
Writing Apprehension Test | 1
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
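The entry above concerns comparing a student summary against a longer source text. As a hedged illustration only, and not the authors' scoring system, the sketch below computes one feature commonly used for such comparisons: TF-IDF cosine similarity between summary and source. The example texts and variable names are assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder texts; in the summary-scoring setting these would be a source
# passage and a student-written summary of it (illustrative only).
source_text = ("Summarization promotes deep comprehension of texts, but manual "
               "evaluation of student summaries requires substantial time and effort.")
summary = "Summarizing aids comprehension, yet grading summaries by hand is slow."

# Fitting a shared vocabulary projects both texts into the same vector space,
# which sidesteps their difference in length.
vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform([source_text, summary])

# A single scalar feature: content overlap between summary and source.
overlap = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"TF-IDF cosine overlap: {overlap:.3f}")

In practice a scoring model would combine many such features; this shows only the shape of a single length-insensitive comparison.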
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Theories of discourse argue that comprehension depends on the coherence of the learner's mental representation. Our aim is to create a reliable automated representation to estimate readers' level of comprehension based on different productions, namely self-explanations and answers to open-ended questions. Previous work relied on Cohesion Network…
Descriptors: Network Analysis, Reading Comprehension, Automation, Artificial Intelligence
Botarleanu, Robert-Mihai; Dascalu, Mihai; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2020
A key writing skill is the capability to clearly convey desired meaning using available linguistic knowledge. Consequently, writers must select from a large array of idioms, vocabulary terms that are semantically equivalent, and discourse features that simultaneously reflect content and allow readers to grasp meaning. In many cases, a simplified…
Descriptors: Natural Language Processing, Writing Skills, Difficulty Level, Reading Comprehension
McCarthy, Kathryn S.; Roscoe, Rod D.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2019
This study investigated the effect of incorporating spelling and grammar checking tools within an automated writing tutoring system, Writing Pal. High school students (n = 119) wrote and revised six persuasive essays. After initial drafts, all students received formative feedback about writing strategies. Half of the participants were also given…
Descriptors: Spelling, Grammar, Automation, Writing Instruction
Dascalu, Mihai; Allen, Laura K.; McNamara, Danielle S.; Trausan-Matu, Stefan; Crossley, Scott A. – Grantee Submission, 2017
Dialogism provides the grounds for building a comprehensive model of discourse and focuses on the multiplicity of perspectives (i.e., voices). Dialogism can be present in any type of text, with voices emerging as themes or recurrent topics in the discourse. In this study, we examine the extent to which differences between…
Descriptors: Dialogs (Language), Protocol Analysis, Discourse Analysis, Automation
Allen, Laura K.; Jacovina, Matthew E.; Johnson, Adam C.; McNamara, Danielle S.; Roscoe, Rod D. – Grantee Submission, 2016
Revising is an essential writing process, yet automated writing evaluation systems tend to give feedback on discrete essay drafts rather than on changes across drafts. We explore the feasibility of automated revision detection and its potential to guide feedback. Relationships between revising behaviors and linguistic features of students' essays are…
Descriptors: Revision (Written Composition), Automation, Writing Evaluation, Feedback (Response)
Allen, Laura K.; Perret, Cecile; McNamara, Danielle S. – Grantee Submission, 2016
The relationship between working memory capacity and writing ability was examined via a linguistic analysis of student essays. Undergraduate students (n = 108) wrote timed, prompt-based essays and completed a battery of cognitive assessments. The surface- and discourse-level linguistic features of students' essays were then analyzed using natural…
Descriptors: Cognitive Processes, Writing (Composition), Short Term Memory, Writing Ability
McNamara, Danielle S.; Graesser, Arthur C.; McCarthy, Philip M.; Cai, Zhiqiang – Cambridge University Press, 2014
Coh-Metrix is among the broadest and most sophisticated automated textual assessment tools available today. Automated Evaluation of Text and Discourse with Coh-Metrix describes this computational tool, as well as the wide range of language and discourse measures it provides. Section I of the book focuses on the theoretical perspectives that led to…
Descriptors: Writing Evaluation, Computational Linguistics, Connected Discourse, Data Analysis
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of argumentative writing generally includes analyses of the specific linguistic and rhetorical features contained in the individual essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing may more accurately capture their…
Descriptors: Writing (Composition), Persuasive Discourse, Essays, Language Usage
Crossley, Scott; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates a new approach to automatically assessing essay quality that combines traditional approaches based on assessing textual features with new approaches that measure student attributes such as demographic information, standardized test scores, and survey results. The results demonstrate that combining both text features and…
Descriptors: Automation, Scoring, Essays, Evaluation Methods
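As a minimal sketch of the general idea in the entry above (combining text-derived features with student attributes to predict essay quality), the following uses synthetic data and assumed feature names; it is illustrative only, not the study's feature set or model.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: two text features (e.g., essay length, lexical
# diversity) and two student attributes (e.g., a test score, a survey item).
text_features = rng.normal(size=(n, 2))
student_attributes = rng.normal(size=(n, 2))
essay_scores = rng.normal(size=n)  # placeholder human ratings

# Compare a text-only model with a model that also includes student attributes.
# With purely synthetic data these R^2 values hover near zero; real, informative
# features are what would separate the two models.
text_only = cross_val_score(LinearRegression(), text_features, essay_scores,
                            cv=5, scoring="r2")
combined = cross_val_score(LinearRegression(),
                           np.hstack([text_features, student_attributes]),
                           essay_scores, cv=5, scoring="r2")
print(f"text-only mean R^2: {text_only.mean():.3f}")
print(f"combined mean R^2:  {combined.mean():.3f}")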
Guo, Liang; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2013
This study explores whether linguistic features can predict second language writing proficiency in the Test of English as a Foreign Language (TOEFL iBT) integrated and independent writing tasks and, if so, whether there are differences and similarities in the two sets of predictive linguistic features. Linguistic features related to lexical…
Descriptors: English (Second Language), Linguistics, Second Language Learning, Writing Skills
Crossley, Scott A.; Varner, Laura K.; McNamara, Danielle S. – Grantee Submission, 2013
Linguistic properties of writing prompts have been shown to influence the writing patterns contained in student essays. The majority of previous research on these prompt-based effects has focused on the lexical and syntactic properties of writing prompts and essays. The current study expands this research by investigating the effects of prompt…
Descriptors: Persuasive Discourse, Prompting, Writing Instruction, Essays
Roscoe, Rod D.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2013
This study investigates students' essay revising in the context of an intelligent tutoring system called "Writing Pal" (W-Pal), which combines strategy instruction, game-based practice, essay writing practice, and automated formative feedback. We examine how high school students use W-Pal feedback to revise essays in two different…
Descriptors: Essays, Revision (Written Composition), Intelligent Tutoring Systems, Educational Games
Snow, Erica L.; Allen, Laura K.; Jacovina, Matthew E.; Crossley, Scott A.; Perret, Cecile A.; McNamara, Danielle S. – Journal of Learning Analytics, 2015
Writing researchers have suggested that students who are perceived as strong writers (i.e., those who generate texts rated as high quality) demonstrate flexibility in their writing style. While anecdotally this has been a commonly held belief among researchers and educators, there is little empirical research to support this claim. This study…
Descriptors: Writing (Composition), Writing Strategies, Hypothesis Testing, Essays