Allen, Laura Kristen; Magliano, Joseph P.; McCarthy, Kathryn S.; Sonia, Allison N.; Creer, Sarah D.; McNamara, Danielle S. – Grantee Submission, 2021
The current study examined the extent to which the cohesion detected in readers' constructed responses to multiple documents was predictive of persuasive, source-based essay quality. Participants (N=95) completed multiple-documents reading tasks wherein they were prompted to think-aloud, self-explain, or evaluate the sources while reading a set of…
Descriptors: Reading Comprehension, Connected Discourse, Reader Response, Natural Language Processing
Öncel, Püren; Flynn, Lauren E.; Sonia, Allison N.; Barker, Kennis E.; Lindsay, Grace C.; McClure, Caleb M.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2021
Automated Writing Evaluation systems have been developed to help students improve their writing skills through the automated delivery of both summative and formative feedback. These systems have demonstrated strong potential in a variety of educational contexts; however, they remain limited in their personalization and scope. The purpose of the…
Descriptors: Computer Assisted Instruction, Writing Evaluation, Formative Evaluation, Summative Evaluation
Sonia, Allison N.; Magliano, Joseph P.; McCarthy, Kathryn S.; Creer, Sarah D.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2022
The constructed responses individuals generate while reading can provide insights into their coherence-building processes. The current study examined how the cohesion of constructed responses relates to performance on an integrated writing task. Participants (N = 95) completed a multiple document reading task wherein they were prompted to think…
Descriptors: Natural Language Processing, Connected Discourse, Reading Processes, Writing Skills
Sonia, Allison N.; Magliano, Joseph P.; McCarthy, Kathryn S.; Creer, Sarah D.; McNamara, Danielle S.; Allen, Laura K. – Discourse Processes: A Multidisciplinary Journal, 2022
The constructed responses individuals generate while reading can provide insights into their coherence-building processes. The current study examined how the cohesion of constructed responses relates to performance on an integrated writing task. Participants (N = 95) completed a multiple document reading task wherein they were prompted to think…
Descriptors: Natural Language Processing, Connected Discourse, Reading Processes, Writing Skills
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2017
This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…
Descriptors: Artificial Intelligence, Natural Language Processing, Reading Comprehension, Literature
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – International Educational Data Mining Society, 2017
This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…
Descriptors: Artificial Intelligence, Natural Language Processing, Reading Comprehension, Literature
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2017
The current study examined the degree to which the quality and characteristics of students' essays could be modeled through dynamic natural language processing analyses. Undergraduate students (n = 131) wrote timed, persuasive essays in response to an argumentative writing prompt. Recurrent patterns of the words in the essays were then analyzed…
Descriptors: Writing Evaluation, Essays, Persuasive Discourse, Natural Language Processing
Allen, Laura K.; Jacovina, Matthew E.; Dascalu, Mihai; Roscoe, Rod D.; Kent, Kevin M.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2016
This study investigates how and whether information about students' writing can be recovered from basic behavioral data extracted during their sessions in an intelligent tutoring system for writing. We calculate basic and time-sensitive keystroke indices based on log files of keys pressed during students' writing sessions. A corpus of prompt-based…
Descriptors: Essays, Writing Processes, Writing (Composition), Writing Instruction
Allen, Laura K.; Perret, Cecile; McNamara, Danielle S. – Grantee Submission, 2016
The relationship between working memory capacity and writing ability was examined via a linguistic analysis of student essays. Undergraduate students (n = 108) wrote timed, prompt-based essays and completed a battery of cognitive assessments. The surface- and discourse-level linguistic features of students' essays were then analyzed using natural…
Descriptors: Cognitive Processes, Writing (Composition), Short Term Memory, Writing Ability
Johnson, Amy M.; McCarthy, Kathryn S.; Kopp, Kristopher J.; Perret, Cecile A.; McNamara, Danielle S. – Grantee Submission, 2017
Intelligent tutoring systems for ill-defined domains, such as reading and writing, are critically needed, yet uncommon. Two such systems, the Interactive Strategy Training for Active Reading and Thinking (iSTART) and Writing Pal (W-Pal) use natural language processing (NLP) to assess learners' written (i.e., typed) responses and provide immediate,…
Descriptors: Reading Instruction, Writing Instruction, Intelligent Tutoring Systems, Reading Strategies
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of writing proficiency generally includes analyses of the specific linguistic and rhetorical features contained in the singular essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing might more closely capture writing skill.…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Skills
Allen, Laura K.; McNamara, Danielle S. – International Educational Data Mining Society, 2015
The current study investigates the degree to which the lexical properties of students' essays can inform stealth assessments of their vocabulary knowledge. In particular, we used indices calculated with the natural language processing tool, TAALES, to predict students' performance on a measure of vocabulary knowledge. To this end, two corpora were…
Descriptors: Vocabulary, Knowledge Level, Models, Natural Language Processing
Allen, Laura K.; Jacovina, Matthew E.; Dascalu, Mihai; Roscoe, Rod D.; Kent, Kevin M.; Likens, Aaron D.; McNamara, Danielle S. – International Educational Data Mining Society, 2016
This study investigates how and whether information about students' writing can be recovered from basic behavioral data extracted during their sessions in an intelligent tutoring system for writing. We calculate basic and time-sensitive keystroke indices based on log files of keys pressed during students' writing sessions. A corpus of prompt-based…
Descriptors: Writing Processes, Intelligent Tutoring Systems, Natural Language Processing, Feedback (Response)
Crossley, Scott A.; Kyle, Kristopher; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates the relative efficacy of using linguistic micro-features, the aggregation of such features, and a combination of micro-features and aggregated features in developing automatic essay scoring (AES) models. Although the use of aggregated features is widespread in AES systems (e.g., e-rater; Intellimetric), very little…
Descriptors: Essays, Scoring, Feedback (Response), Writing Evaluation
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of argumentative writing generally includes analyses of the specific linguistic and rhetorical features contained in the individual essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing may more accurately capture their…
Descriptors: Writing (Composition), Persuasive Discourse, Essays, Language Usage