Publication Date
In 2025 | 3 |
Since 2024 | 10 |
Since 2021 (last 5 years) | 29 |
Since 2016 (last 10 years) | 45 |
Since 2006 (last 20 years) | 47 |
Source
Grantee Submission | 47 |
Author
Danielle S. McNamara | 15 |
McNamara, Danielle S. | 12 |
Mihai Dascalu | 9 |
Dascalu, Mihai | 6 |
Renu Balyan | 6 |
Allen, Laura K. | 4 |
Balyan, Renu | 4 |
McCarthy, Kathryn S. | 4 |
Stefan Ruseti | 4 |
Tracy Arner | 4 |
Crossley, Scott A. | 3 |
Publication Type
Reports - Research | 37 |
Speeches/Meeting Papers | 23 |
Journal Articles | 12 |
Reports - Descriptive | 6 |
Reports - Evaluative | 4 |
Education Level
Higher Education | 8 |
Postsecondary Education | 8 |
Elementary Education | 6 |
Secondary Education | 4 |
Junior High Schools | 3 |
Middle Schools | 3 |
Early Childhood Education | 2 |
Grade 2 | 2 |
Grade 7 | 2 |
Grade 8 | 2 |
High Schools | 2 |
Location
California | 2 |
Florida | 1 |
Romania | 1 |
Tennessee | 1 |
Assessments and Surveys
Flesch Kincaid Grade Level… | 2 |
Flesch Reading Ease Formula | 1 |
Woodcock Johnson Tests of… | 1 |
Large Language Models and Intelligent Tutoring Systems: Conflicting Paradigms and Possible Solutions

Punya Mishra; Danielle S. McNamara; Gregory Goodwin; Diego Zapata-Rivera – Grantee Submission, 2025
The advent of Large Language Models (LLMs) has fundamentally disrupted our thinking about educational technology. Their ability to engage in natural dialogue, provide contextually relevant responses, and adapt to learner needs has led many to envision them as powerful tools for personalized learning. This emergence raises important questions about…
Descriptors: Artificial Intelligence, Intelligent Tutoring Systems, Technology Uses in Education, Educational Technology

Clayton Cohn; Surya Rayala; Caitlin Snyder; Joyce Horn Fonteles; Shruti Jain; Naveeduddin Mohammed; Umesh Timalsina; Sarah K. Burriss; Ashwin T. S.; Namrata Srivastava; Menton Deweese; Angela Eeds; Gautam Biswas – Grantee Submission, 2025
Collaborative dialogue offers rich insights into students' learning and critical thinking. This is essential for adapting pedagogical agents to students' learning and problem-solving skills in STEM+C settings. While large language models (LLMs) facilitate dynamic pedagogical interactions, potential hallucinations can undermine confidence, trust,…
Descriptors: STEM Education, Computer Science Education, Artificial Intelligence, Natural Language Processing
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
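
For illustration only (not the authors' method): one common way to filter out distractors that closely resemble the correct answer is to compare sentence embeddings and drop candidates above a similarity threshold. The sketch below assumes the sentence-transformers library; the model choice, threshold, and example items are all hypothetical.

    # Hypothetical sketch: drop candidate distractors that are semantically too
    # close to the correct answer. Illustrates similarity-based filtering only;
    # it is not the pipeline described in the paper.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose encoder

    def filter_distractors(answer, candidates, max_sim=0.8):
        """Keep candidates whose cosine similarity to the answer stays below max_sim."""
        answer_emb = model.encode(answer, convert_to_tensor=True)
        cand_embs = model.encode(candidates, convert_to_tensor=True)
        sims = util.cos_sim(answer_emb, cand_embs)[0]
        return [c for c, s in zip(candidates, sims) if float(s) < max_sim]

    kept = filter_distractors(
        "Photosynthesis converts light energy into chemical energy.",
        [
            "Photosynthesis turns light into chemical energy.",  # near-duplicate, likely dropped
            "Photosynthesis releases energy by breaking down glucose.",
            "Photosynthesis occurs only in animal cells.",
        ],
    )
    print(kept)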

Yang Zhong; Mohamed Elaraby; Diane Litman; Ahmed Ashraf Butt; Muhsin Menekse – Grantee Submission, 2024
This paper introduces REFLECTSUMM, a novel summarization dataset specifically designed for summarizing students' reflective writing. The goal of REFLECTSUMM is to facilitate developing and evaluating novel summarization techniques tailored to real-world scenarios with little training data, with potential implications in the opinion summarization…
Descriptors: Documentation, Writing (Composition), Reflection, Metadata
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
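
As a generic illustration of the kind of pipeline the abstract refers to (automated feature extraction plus automated model selection), the sketch below uses scikit-learn's GridSearchCV over a TF-IDF + Ridge pipeline as a stand-in; it is not the AutoML pipeline introduced in the paper, and the essays and scores are toy data.

    # Stand-in for an AutoML-style essay-scoring setup: features and
    # hyperparameters are chosen by cross-validated search rather than by hand.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline

    essays = [
        "The experiment shows that plants grow faster with more light.",
        "In conclusion, the data support the original hypothesis.",
        "Cells divide by mitosis to produce two identical daughter cells.",
        "The author argues that erosion shaped the canyon over millions of years.",
        "Energy is conserved because it changes form but is never destroyed.",
        "The essay restates the prompt without adding any evidence.",
    ]
    scores = [4.0, 3.0, 5.0, 4.0, 5.0, 2.0]  # toy holistic scores

    pipeline = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
        ("model", Ridge()),
    ])
    search = GridSearchCV(
        pipeline,
        param_grid={"tfidf__max_features": [500, 2000], "model__alpha": [0.1, 1.0, 10.0]},
        cv=3,  # the automated search step
    )
    search.fit(essays, scores)
    print(search.best_params_)
    print(search.predict(["Plants need light, water, and carbon dioxide to grow."]))
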
Ying Fang; Rod D. Roscoe; Danielle S. McNamara – Grantee Submission, 2023
Artificial Intelligence (AI) based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Dragos-Georgian Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Modeling reading comprehension processes is a critical task for Learning Analytics, as accurate models of the reading process can be used to match students to texts, identify appropriate interventions, and predict learning outcomes. This paper introduces an improved version of the Automated Model of Comprehension, namely version 4.0. AMoC has its…
Descriptors: Computer Software, Artificial Intelligence, Learning Analytics, Natural Language Processing
Muhsin Menekse – Grantee Submission, 2023
Generative artificial intelligence (AI) technologies, such as large language models (LLMs) and diffusion model image and video generators, can transform learning and teaching experiences by providing students and instructors with access to a vast amount of information and by creating innovative learning and teaching materials in a very efficient way…
Descriptors: Educational Trends, Engineering Education, Artificial Intelligence, Technology Uses in Education
Linh Huynh; Danielle S. McNamara – Grantee Submission, 2025
We conducted two experiments to assess the alignment between Generative AI (GenAI) text personalization and hypothetical readers' profiles. In Experiment 1, four LLMs (i.e., Claude 3.5 Sonnet; Llama; Gemini Pro 1.5; ChatGPT 4) were prompted to tailor 10 science texts (i.e., biology, chemistry, physics) to accommodate four different profiles…
Descriptors: Natural Language Processing, Profiles, Individual Differences, Semantics
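
A minimal sketch of this kind of profile-conditioned prompting is shown below, using the OpenAI Python client as one example backend. The model name, prompt wording, and reader-profile fields are assumptions for illustration; they are not the prompts, profiles, or models used in the study.

    # Sketch of prompting an LLM to tailor a science text to a reader profile.
    # Requires an OPENAI_API_KEY in the environment; all specifics are illustrative.
    from openai import OpenAI

    client = OpenAI()

    def personalize(text: str, profile: dict) -> str:
        prompt = (
            f"Rewrite the following science text for a reader with "
            f"{profile['reading_skill']} reading skill and "
            f"{profile['prior_knowledge']} prior knowledge of the topic. "
            f"Keep the scientific content accurate.\n\nText:\n{text}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice, not one of the study's four LLMs
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    profile = {"reading_skill": "low", "prior_knowledge": "low"}
    print(personalize("Mitochondria produce ATP through cellular respiration.", profile))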

Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2023
This paper systematically explores how Large Language Models (LLMs) generate explanations of code examples of the type used in intro-to-programming courses. As we show, the nature of code explanations generated by LLMs varies considerably based on the wording of the prompt, the target code examples being explained, the programming language, the…
Descriptors: Computational Linguistics, Programming, Computer Science Education, Programming Languages
Andrew M. Olney – Grantee Submission, 2023
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully-controlled…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Algorithms
Renu Balyan; Danielle S. McNamara; Scott A. Crossley; William Brown; Andrew J. Karter; Dean Schillinger – Grantee Submission, 2022
Online patient portals that facilitate communication between patient and provider can improve patients' medication adherence and health outcomes. The effectiveness of such web-based communication measures can be influenced by the health literacy (HL) of a patient. In the context of diabetes, low HL is associated with severe hypoglycemia and high…
Descriptors: Computational Linguistics, Patients, Physicians, Information Security
Bogdan Nicula; Mihai Dascalu; Tracy Arner; Renu Balyan; Danielle S. McNamara – Grantee Submission, 2023
Text comprehension is an essential skill in today's information-rich world, and self-explanation practice helps students improve their understanding of complex texts. This study was centered on leveraging open-source Large Language Models (LLMs), specifically FLAN-T5, to automatically assess the comprehension strategies employed by readers while…
Descriptors: Reading Comprehension, Language Processing, Models, STEM Education
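
For readers unfamiliar with FLAN-T5, the sketch below shows a zero-shot way to ask the model which strategy a self-explanation reflects, using the Hugging Face transformers pipeline. The prompt wording and label set are illustrative assumptions; the study's own modeling and evaluation setup is not reproduced here.

    # Zero-shot strategy labeling of a self-explanation with FLAN-T5.
    from transformers import pipeline

    labeler = pipeline("text2text-generation", model="google/flan-t5-base")

    self_explanation = (
        "This reminds me of how a battery stores energy, so the cell must be "
        "storing energy in chemical bonds."
    )
    prompt = (
        "Which comprehension strategy does this self-explanation mainly use: "
        "paraphrasing, bridging, or elaboration?\n"
        f"Self-explanation: {self_explanation}"
    )
    print(labeler(prompt, max_new_tokens=10)[0]["generated_text"])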

Ha Tien Nguyen; Conrad Borchers; Meng Xia; Vincent Aleven – Grantee Submission, 2024
Intelligent tutoring systems (ITS) can help students learn successfully, yet little work has explored the role of caregivers in shaping that success. Past interventions that help caregivers support their child's homework have been largely disconnected from educational technology. The paper presents prototyping design research with nine middle…
Descriptors: Middle School Mathematics, Intelligent Tutoring Systems, Caregivers, Caregiver Attitudes
Bogdan Nicula; Marilena Panaite; Tracy Arner; Renu Balyan; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Self-explanation practice is an effective method to support students in better understanding complex texts. This study focuses on automatically assessing the comprehension strategies employed by readers while understanding STEM texts. Data from 3 datasets (N = 11,833) with self-explanations annotated on different comprehension strategies (i.e.,…
Descriptors: Reading Strategies, Reading Comprehension, Metacognition, STEM Education