Showing all 8 results
Fang, Ying; Li, Tong; Huynh, Linh; Christhilf, Katerina; Roscoe, Rod D.; McNamara, Danielle S. – Grantee Submission, 2023
Literacy assessment is essential for effective literacy instruction and training. However, traditional paper-based literacy assessments are typically decontextualized and may cause stress and anxiety for test takers. In contrast, serious games and game environments allow for the assessment of literacy in more authentic and engaging ways, which has…
Descriptors: Literacy, Student Evaluation, Educational Games, Literacy Education
Chen, Su; Fang, Ying; Shi, Genghu; Sabatini, John; Greenberg, Daphne; Frijters, Jan; Graesser, Arthur C. – Grantee Submission, 2021
This paper describes a new automated disengagement tracking system (DTS) that detects learners' maladaptive behaviors, e.g. mind-wandering and impetuous responding, in an intelligent tutoring system (ITS), called AutoTutor. AutoTutor is a conversation-based intelligent tutoring system designed to help adult literacy learners improve their reading…
Descriptors: Intelligent Tutoring Systems, Artificial Intelligence, Attention, Adult Literacy
Peer reviewed
PDF on ERIC: Download full text
Crossley, Scott; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2015
This study investigates a new approach to automatically assessing essay quality that combines traditional approaches based on assessing textual features with new approaches that measure student attributes such as demographic information, standardized test scores, and survey results. The results demonstrate that combining both text features and…
Descriptors: Automation, Scoring, Essays, Evaluation Methods
Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2015
This study builds upon previous work aimed at developing a student model of reading comprehension ability within the intelligent tutoring system, iSTART. Currently, the system evaluates students' self-explanation performance using a local, sentence-level algorithm and does not adapt content based on reading ability. The current study leverages…
Descriptors: Reading Comprehension, Reading Skills, Natural Language Processing, Intelligent Tutoring Systems
Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2016
A commonly held belief among educators, researchers, and students is that high-quality texts are easier to read than low-quality texts, as they contain more engaging narrative and story-like elements. Interestingly, these assumptions have typically failed to be supported by the literature on writing. Previous research suggests that higher quality…
Descriptors: Role, Writing (Composition), Natural Language Processing, Hypothesis Testing
Peer reviewed
Direct link
Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Journal of Educational Psychology, 2016
A commonly held belief among educators, researchers, and students is that high-quality texts are easier to read than low-quality texts, as they contain more engaging narrative and story-like elements. Interestingly, these assumptions have typically failed to be supported by the literature on writing. Previous research suggests that higher quality…
Descriptors: Role, Writing (Composition), Natural Language Processing, Hypothesis Testing
Peer reviewed
PDF on ERIC: Download full text
Varner, Laura K.; Jackson, G. Tanner; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2013
This study expands upon an existing model of students' reading comprehension ability within an intelligent tutoring system. The current system evaluates students' natural language input using a local student model. We examine the potential to expand this model by assessing the linguistic features of self-explanations aggregated across entire…
Descriptors: Reading Comprehension, Intelligent Tutoring Systems, Natural Language Processing, Reading Ability
Peer reviewed
PDF on ERIC: Download full text
Crossley, Scott A.; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Journal of Educational Data Mining, 2016
This study investigates a novel approach to automatically assessing essay quality that combines natural language processing approaches that assess text features with approaches that assess individual differences in writers such as demographic information, standardized test scores, and survey results. The results demonstrate that combining text…
Descriptors: Essays, Scoring, Writing Evaluation, Natural Language Processing