Publication Date
In 2025 (2)
Since 2024 (8)
Descriptor
Natural Language Processing (8)
Artificial Intelligence (7)
Prediction (4)
Automation (3)
French (3)
Models (3)
Reading Comprehension (3)
Algorithms (2)
Computational Linguistics (2)
Computer Assisted Testing (2)
Computer Interfaces (2)
Author
Danielle S. McNamara (8)
Mihai Dascalu (5)
Stefan Ruseti (4)
Ionut Paraschiv (2)
Linh Huynh (2)
Micah Watanabe (2)
Andreea Dutulescu (1)
Bailing Lyu (1)
Denis Iorga (1)
Diego Zapata-Rivera (1)
Dragos-Georgian Corlatescu (1)
Publication Type
Reports - Research (7)
Journal Articles (4)
Reports - Evaluative (1)
Speeches/Meeting Papers (1)
Education Level
Higher Education (1)
Postsecondary Education (1)
Large Language Models and Intelligent Tutoring Systems: Conflicting Paradigms and Possible Solutions

Punya Mishra; Danielle S. McNamara; Gregory Goodwin; Diego Zapata-Rivera – Grantee Submission, 2025
The advent of Large Language Models (LLMs) has fundamentally disrupted our thinking about educational technology. Their ability to engage in natural dialogue, provide contextually relevant responses, and adapt to learner needs has led many to envision them as powerful tools for personalized learning. This emergence raises important questions about…
Descriptors: Artificial Intelligence, Intelligent Tutoring Systems, Technology Uses in Education, Educational Technology
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
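The entry above describes filtering out automatically generated distractors that sit too close to the correct answer. A minimal sketch of that general idea, not the cited paper's method: score each candidate by embedding similarity to the key and drop near-paraphrases. The model name, threshold, and example items are illustrative assumptions.

```python
# Sketch: reject candidate distractors that are near-paraphrases of the key.
# Embedding model and similarity threshold are assumptions, not the cited method.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def filter_distractors(correct_answer, candidates, max_similarity=0.8):
    """Keep candidates that are not semantically too close to the correct answer."""
    vectors = model.encode([correct_answer] + candidates)
    answer_vec, cand_vecs = vectors[0], vectors[1:]
    kept = []
    for cand, vec in zip(candidates, cand_vecs):
        cosine = float(np.dot(answer_vec, vec) /
                       (np.linalg.norm(answer_vec) * np.linalg.norm(vec)))
        if cosine < max_similarity:  # above the threshold it risks being a second key
            kept.append((cand, round(cosine, 3)))
    return kept

print(filter_distractors(
    "Mitochondria produce ATP through cellular respiration.",
    ["Mitochondria generate ATP via respiration.",      # likely rejected
     "Ribosomes produce ATP through photosynthesis.",
     "Chloroplasts store the cell's genetic material."]))
```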
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
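Both AES entries above contrast handcrafted features with large models and propose an automated search over configurations. A generic illustration of that kind of search using scikit-learn, not the authors' AutoML pipeline; the essays, grid values, and scoring choice are placeholders.

```python
# Generic illustration: grid search over a small feature/model space for essay
# scoring. Data and hyperparameter values are placeholder assumptions.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

essays = ["First sample essay text", "Second sample essay text",
          "Third sample essay text", "Fourth sample essay text"]
scores = [2.0, 4.0, 3.0, 5.0]  # hypothetical human ratings

pipeline = Pipeline([
    ("features", TfidfVectorizer()),
    ("model", Ridge()),
])

# The tiny grid stands in for the much larger space an AutoML system explores.
search = GridSearchCV(
    pipeline,
    param_grid={
        "features__ngram_range": [(1, 1), (1, 2)],
        "model__alpha": [0.1, 1.0, 10.0],
    },
    cv=2,
    scoring="neg_mean_squared_error",
)
search.fit(essays, scores)
print(search.best_params_, search.best_score_)
```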
Matthew T. McCrudden; Linh Huynh; Bailing Lyu; Jonna M. Kulikowich; Danielle S. McNamara – Grantee Submission, 2024
Readers build a mental representation of text during reading. The coherence-building processes readers use to build a mental representation during reading are key to comprehension. We examined the effects of self-explanation on coherence-building processes as undergraduates (n = 51) read five complementary texts about natural selection and…
Descriptors: Reading Processes, Reading Comprehension, Undergraduate Students, Evolution
Dragos-Georgian Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Modeling reading comprehension processes is a critical task for Learning Analytics, as accurate models of the reading process can be used to match students to texts, identify appropriate interventions, and predict learning outcomes. This paper introduces an improved version of the Automated Model of Comprehension, namely version 4.0. AMoC has its…
Descriptors: Computer Software, Artificial Intelligence, Learning Analytics, Natural Language Processing
Linh Huynh; Danielle S. McNamara – Grantee Submission, 2025
We conducted two experiments to assess the alignment between Generative AI (GenAI) text personalization and hypothetical readers' profiles. In Experiment 1, four LLMs (i.e., Claude 3.5 Sonnet; Llama; Gemini Pro 1.5; ChatGPT 4) were prompted to tailor 10 science texts (i.e., biology, chemistry, physics) to accommodate four different profiles…
Descriptors: Natural Language Processing, Profiles, Individual Differences, Semantics
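The study above prompts several LLMs to tailor science texts to hypothetical reader profiles. A minimal sketch of such a prompting setup under stated assumptions: `complete` is a hypothetical stand-in for any chat-completion client, and the profile fields and prompt wording are illustrative, not the study's materials.

```python
# Sketch of profile-conditioned text personalization. The `complete` callable,
# profile fields, and prompt wording are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ReaderProfile:
    reading_skill: str      # e.g., "low" or "high"
    prior_knowledge: str    # e.g., "little background in chemistry"

def build_personalization_prompt(text: str, profile: ReaderProfile) -> str:
    return (
        "Rewrite the following science text for a reader with "
        f"{profile.reading_skill} reading skill and {profile.prior_knowledge}. "
        "Preserve the factual content; adjust vocabulary, sentence length, "
        "and cohesion cues to suit the reader.\n\n" + text
    )

def personalize(text: str, profile: ReaderProfile, complete) -> str:
    """`complete` is any callable that sends a prompt to an LLM and returns text."""
    return complete(build_personalization_prompt(text, profile))

# Usage with a stand-in "model" that simply echoes the prompt:
profile = ReaderProfile("low", "little background in chemistry")
print(personalize("Entropy measures the dispersal of energy.", profile,
                  complete=lambda prompt: prompt))
```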
Robert-Mihai Botarleanu; Micah Watanabe; Mihai Dascalu; Scott A. Crossley; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Age of Acquisition (AoA) scores approximate the age at which a language speaker fully understands a word's semantic meaning and represent a quantitative measure of the relative difficulty of words in a language. AoA word lists exist across various languages, with English having the most complete lists that capture the largest percentage of the…
Descriptors: Multilingualism, English (Second Language), Second Language Learning, Second Language Instruction
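As a small worked illustration of how an AoA list can serve as a word-difficulty measure (the norm values below are invented placeholders, not real AoA data): average the AoA of the words in a text that the list covers.

```python
# Illustrative only: estimate text difficulty as the mean Age-of-Acquisition of
# the words covered by an AoA lookup table. Values are invented placeholders.
AOA_NORMS = {
    "dog": 3.1,
    "acquire": 8.9,
    "photosynthesis": 11.4,
}

def mean_aoa(text: str, norms: dict[str, float]) -> float | None:
    """Average AoA over the words the norms cover; None if nothing matches."""
    ages = [norms[w] for w in text.lower().split() if w in norms]
    return sum(ages) / len(ages) if ages else None

print(mean_aoa("The dog can acquire photosynthesis facts", AOA_NORMS))  # 7.8
```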