Showing 16 to 30 of 64 results
Peer reviewed
PDF on ERIC
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2018
While hierarchical machine learning approaches have been used to classify texts into different content areas, this approach has, to our knowledge, not been used in the automated assessment of text difficulty. This study compared the accuracy of four classification machine learning approaches (flat, one-vs-one, one-vs-all, and hierarchical) using…
Descriptors: Artificial Intelligence, Classification, Comparative Analysis, Prediction
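The flat, one-vs-one, and one-vs-all strategies named in this entry are standard multi-class decompositions. Below is a minimal sketch of how they differ in practice, assuming scikit-learn, synthetic placeholder features, and a logistic-regression base learner; none of this comes from the study itself, and the hierarchical variant it also tested is not shown.

```python
# Hypothetical sketch: contrasting flat, one-vs-one, and one-vs-all
# multi-class strategies on synthetic data (not the study's pipeline).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

# Placeholder features standing in for text-difficulty indices.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

base = LogisticRegression(max_iter=1000)
strategies = {
    "flat (multinomial)": base,
    "one-vs-one": OneVsOneClassifier(base),
    "one-vs-all": OneVsRestClassifier(base),
}

for name, clf in strategies.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:20s} mean CV accuracy = {acc:.3f}")
```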
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Open-ended comprehension questions are a common type of assessment used to evaluate how well students understand one of multiple documents. Our aim is to use natural language processing (NLP) to infer the level and type of inferencing within readers' answers to comprehension questions using linguistic and semantic features within their responses.…
Descriptors: Natural Language Processing, Taxonomy, Responses, Semantics
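As a loose illustration of feeding linguistic and semantic features of open-ended answers into a classifier, the toy sketch below uses TF-IDF cosine overlap with a source sentence plus answer length; the example texts, labels, and features are invented and are not the paper's measures.

```python
# Hypothetical sketch: shallow linguistic/semantic features for scoring
# open-ended answers against a source text (illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

source = "Plants convert light energy into chemical energy during photosynthesis."
answers = [
    "The plant uses sunlight to make its own food.",                           # paraphrase-like
    "Because energy must come from somewhere, the plant stores it as sugar.",  # inference-like
    "I like plants.",                                                          # off-topic
]
labels = [1, 2, 0]  # toy inference-level codes assigned by hand

vec = TfidfVectorizer().fit([source] + answers)
src_vec = vec.transform([source])

def features(text):
    sim = cosine_similarity(vec.transform([text]), src_vec)[0, 0]  # semantic overlap proxy
    n_words = len(text.split())                                    # simple linguistic feature
    return [sim, n_words]

X = np.array([features(a) for a in answers])
clf = LogisticRegression(max_iter=1000).fit(X, labels)  # real use needs far more data
print(clf.predict(X))
```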
Nicula, Bogdan; Dascalu, Mihai; Newton, Natalie N.; Orcutt, Ellen; McNamara, Danielle S. – Grantee Submission, 2021
Learning to paraphrase supports both writing ability and reading comprehension, particularly for less skilled learners. As such, educational tools that integrate automated evaluations of paraphrases can be used to provide timely feedback to enhance learner paraphrasing skills more efficiently and effectively. Paraphrase identification is a popular…
Descriptors: Computational Linguistics, Feedback (Response), Classification, Learning Processes
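A hedged sketch of what multi-dimensional paraphrase similarity can look like, using crude stand-in measures (Jaccard word overlap, token-order ratio, character n-gram TF-IDF cosine) rather than the models evaluated in the paper:

```python
# Hypothetical sketch: crude proxies for lexical, word-order, and semantic
# similarity between a source sentence and a candidate paraphrase.
from difflib import SequenceMatcher
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lexical_overlap(a, b):
    """Jaccard overlap of word types (lexical similarity proxy)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def order_similarity(a, b):
    """Matching-ratio over token sequences (word-order / syntax proxy)."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

def semantic_similarity(a, b):
    """Character n-gram TF-IDF cosine (rough semantic proxy)."""
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit([a, b])
    m = vec.transform([a, b])
    return cosine_similarity(m[0], m[1])[0, 0]

source = "The scientist conducted the experiment carefully."
paraphrase = "The experiment was carried out carefully by the researcher."

for name, fn in [("lexical", lexical_overlap),
                 ("word order", order_similarity),
                 ("semantic", semantic_similarity)]:
    print(f"{name:10s} similarity = {fn(source, paraphrase):.2f}")
```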
Nicula, Bogdan; Dascalu, Mihai; Newton, Natalie; Orcutt, Ellen; McNamara, Danielle S. – Grantee Submission, 2021
The ability to automatically assess the quality of paraphrases can be very useful for facilitating literacy skills and providing timely feedback to learners. Our aim is twofold: a) to automatically evaluate the quality of paraphrases across four dimensions: lexical similarity, syntactic similarity, semantic similarity and paraphrase quality, and…
Descriptors: Phrase Structure, Networks, Semantics, Feedback (Response)
Peer reviewed
Direct link
McNamara, Danielle S. – Discourse Processes: A Multidisciplinary Journal, 2021
An overarching motivation driving my research has been to further our theoretical understanding of how readers successfully comprehend challenging text. This article describes the theoretical origins of this research program and my quest to understand comprehension processes through the use of technology. Coh-Metrix was developed to measure, and…
Descriptors: Educational Research, Reading Comprehension, Difficulty Level, Educational Technology
McNamara, Danielle S. – Grantee Submission, 2021
An overarching motivation driving my research has been to further our theoretical understanding of how readers successfully comprehend challenging text. This article describes the theoretical origins of this research program and my quest to understand comprehension processes through the use of technology. Coh-Metrix was developed to measure, and…
Descriptors: Educational Research, Reading Comprehension, Difficulty Level, Educational Technology
Dascalu, Maria-Dorinela; Ruseti, Stefan; Dascalu, Mihai; McNamara, Danielle S.; Carabas, Mihai; Rebedea, Traian – Grantee Submission, 2021
The COVID-19 pandemic has changed the entire world, and the impact and usage of online learning environments have greatly increased. This paper presents a new version of the ReaderBench framework, grounded in Cohesion Network Analysis, which can be used to evaluate the online activity of students as a plug-in feature to Moodle. A Recurrent Neural…
Descriptors: COVID-19, Pandemics, Integrated Learning Systems, School Closing
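The abstract above mentions a recurrent neural network, but the ReaderBench architecture itself is not reproduced here. The sketch below is a generic, minimal LSTM classifier over per-student sequences of activity features, assuming PyTorch and invented input dimensions, purely to illustrate the idea of a recurrent model over online-activity data.

```python
# Hypothetical sketch: a minimal recurrent classifier over per-student
# sequences of activity features (not ReaderBench's actual model).
import torch
import torch.nn as nn

class ActivitySequenceClassifier(nn.Module):
    def __init__(self, n_features, hidden_size=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):              # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)     # final hidden state summarizes the sequence
        return self.head(h_n[-1])      # class logits

model = ActivitySequenceClassifier(n_features=4)
dummy = torch.randn(8, 12, 4)          # 8 students x 12 weeks x 4 activity counts
print(model(dummy).shape)              # torch.Size([8, 2])
```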
Peer reviewed
PDF on ERIC
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2017
This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…
Descriptors: Artificial Intelligence, Natural Language Processing, Reading Comprehension, Literature
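As a generic illustration of benchmarking several classification algorithms against the same human-assigned labels, the sketch below cross-validates a handful of scikit-learn classifiers on placeholder data; the specific seven algorithms and essay features used in the study are not reproduced here.

```python
# Hypothetical sketch: benchmarking several classifiers against the same
# labels by cross-validation (placeholder data, not the study's algorithms).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder NLP indices and human rating categories.
X, y = make_classification(n_samples=400, n_features=30, n_informative=12,
                           n_classes=3, n_clusters_per_class=1, random_state=1)

classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(random_state=1),
    "random forest": RandomForestClassifier(random_state=1),
    "SVM (RBF)": SVC(),
}

for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:20s} mean CV accuracy = {acc:.3f}")
```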
Peer reviewed
PDF on ERIC
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – International Educational Data Mining Society, 2017
This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…
Descriptors: Artificial Intelligence, Natural Language Processing, Reading Comprehension, Literature
Peer reviewed
PDF on ERIC
Crossley, Scott; Kyle, Kristopher; Davenport, Jodi; McNamara, Danielle S. – International Educational Data Mining Society, 2016
This study introduces the Constructed Response Analysis Tool (CRAT), a freely available tool to automatically assess student responses in online tutoring systems. The study tests CRAT on a dataset of chemistry responses collected in the ChemVLab+. The findings indicate that CRAT can differentiate and classify student responses based on semantic…
Descriptors: Intelligent Tutoring Systems, Chemistry, Natural Language Processing, High School Students
Allen, Laura K.; Mills, Caitlin; Perret, Cecile; McNamara, Danielle S. – Grantee Submission, 2019
This study examines the extent to which instructions to self-explain vs. "other"-explain a text lead readers to produce different forms of explanations. Natural language processing was used to examine the content and characteristics of the explanations produced as a function of instruction condition. Undergraduate students (n = 146)…
Descriptors: Language Processing, Science Instruction, Computational Linguistics, Teaching Methods
Peer reviewed
PDF on ERIC
Dascalu, Mihai; Jacovina, Matthew E.; Soto, Christian M.; Allen, Laura K.; Dai, Jianmin; Guerrero, Tricia A.; McNamara, Danielle S. – Grantee Submission, 2017
iSTART is a web-based reading comprehension tutor. A recent translation of iSTART from English to Spanish has made the system available to a new audience. In this paper, we outline several challenges that arose during the development process, specifically focusing on the algorithms that drive the feedback. Several iSTART activities encourage…
Descriptors: Spanish, Reading Comprehension, Natural Language Processing, Intelligent Tutoring Systems
Peer reviewed
PDF on ERIC
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2017
The current study examined the degree to which the quality and characteristics of students' essays could be modeled through dynamic natural language processing analyses. Undergraduate students (n = 131) wrote timed, persuasive essays in response to an argumentative writing prompt. Recurrent patterns of the words in the essays were then analyzed…
Descriptors: Writing Evaluation, Essays, Persuasive Discourse, Natural Language Processing
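A toy sketch of word-level recurrence, in the spirit of the recurrent-pattern analysis described above: it builds a binary recurrence matrix over the token sequence and reports a simple recurrence rate. The text and the index definition are illustrative, not the study's.

```python
# Hypothetical sketch: word-level recurrence in a text, in the spirit of
# recurrence analysis (not the study's exact pipeline or indices).
import numpy as np

def recurrence_matrix(tokens):
    """Binary matrix: cell (i, j) is 1 when token i equals token j."""
    n = len(tokens)
    return np.array([[int(tokens[i] == tokens[j]) for j in range(n)]
                     for i in range(n)])

text = ("good writers revise their writing because revising their "
        "writing makes the writing better")
tokens = text.lower().split()

rm = recurrence_matrix(tokens)
off_diag = rm.sum() - len(tokens)            # ignore trivial self-matches
recurrence_rate = off_diag / (len(tokens) ** 2 - len(tokens))
print(f"tokens = {len(tokens)}, recurrence rate = {recurrence_rate:.3f}")
```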
Peer reviewed
PDF on ERIC
Allen, Laura K.; Jacovina, Matthew E.; Dascalu, Mihai; Roscoe, Rod D.; Kent, Kevin M.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2016
This study investigates how and whether information about students' writing can be recovered from basic behavioral data extracted during their sessions in an intelligent tutoring system for writing. We calculate basic and time-sensitive keystroke indices based on log files of keys pressed during students' writing sessions. A corpus of prompt-based…
Descriptors: Essays, Writing Processes, Writing (Composition), Writing Instruction
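A minimal sketch of turning a keystroke log into a few basic indices (inter-keystroke intervals, long pauses, total time), assuming a toy (timestamp, key) format; the index definitions are illustrative rather than those computed in the study.

```python
# Hypothetical sketch: basic keystroke indices from a (timestamp, key) log
# (illustrative definitions, not the study's exact indices).
from statistics import mean

# Toy log: seconds since session start, key pressed.
log = [(0.00, "T"), (0.21, "h"), (0.35, "e"), (0.50, " "),
       (2.80, "e"), (2.95, "s"), (3.10, "s"), (3.25, "a"), (3.40, "y")]

timestamps = [t for t, _ in log]
intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]

indices = {
    "keystrokes": len(log),
    "mean inter-key interval (s)": round(mean(intervals), 3),
    "pauses > 2 s": sum(1 for d in intervals if d > 2.0),
    "total time (s)": round(timestamps[-1] - timestamps[0], 2),
}
print(indices)
```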
Peer reviewed
PDF on ERIC
Kopp, Kristopher J.; Johnson, Amy M.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2017
An NLP algorithm was developed to assess question quality to inform feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). A corpus of 4575 questions was coded using a four-level taxonomy. NLP indices were calculated for each question and machine learning was used to predict…
Descriptors: Reading Comprehension, Reading Instruction, Intelligent Tutoring Systems, Reading Strategies
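As a rough sketch of predicting a question-quality level from surface NLP indices, the example below uses a few invented features (length, question stem, comma count) and toy labels on a four-level scale; it is not the paper's taxonomy, indices, or model.

```python
# Hypothetical sketch: predicting a question-quality level from a few
# surface indices (placeholder features and labels, not the paper's).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def question_indices(q):
    words = q.lower().split()
    return [
        len(words),                                       # question length
        int(words[0] in {"why", "how"}) if words else 0,  # deeper question stems
        q.count(","),                                     # clause-complexity proxy
    ]

questions = [
    "What is photosynthesis?",
    "Why does the author compare the cell to a factory?",
    "How would the outcome change if the temperature doubled, and why?",
    "Is water wet?",
]
levels = [1, 3, 4, 1]  # toy quality codes on a four-level scale

X = np.array([question_indices(q) for q in questions])
clf = RandomForestClassifier(random_state=0).fit(X, levels)
print(clf.predict(X))  # with real data, evaluate on held-out questions
```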