Arun-Balajiee Lekshmi-Narayanan; Priti Oli; Jeevan Chapagain; Mohammad Hassany; Rabin Banjade; Vasile Rus – Grantee Submission, 2024
Worked examples, which present explained code for solving typical programming problems, are among the most popular types of learning content in programming classes. Most approaches and tools for presenting these examples to students are based on line-by-line explanations of the example code. However, instructors rarely have time to provide…
Descriptors: Coding, Computer Science Education, Computational Linguistics, Artificial Intelligence
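To make the format concrete, the sketch below shows the kind of line-by-line explained worked example the abstract refers to; the problem (summing the even numbers in a list) and the wording of the explanations are illustrative assumptions, not material from the paper.

# Worked example: sum the even numbers in a list.
def sum_even(numbers):
    total = 0                   # start the running total at zero
    for n in numbers:           # visit each element of the input list
        if n % 2 == 0:          # keep only values divisible by 2
            total += n          # add each even value to the total
    return total                # return the accumulated sum

print(sum_even([1, 2, 3, 4]))  # prints 6 (2 + 4)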
Patience Stevens; David C. Plaut – Grantee Submission, 2022
The morphological structure of complex words impacts how they are processed during visual word recognition. This impact varies over the course of reading acquisition and for different languages and writing systems. Many theories of morphological processing rely on a decomposition mechanism, in which words are decomposed into explicit…
Descriptors: Written Language, Morphology (Languages), Word Recognition, Reading Processes
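To illustrate what such a decomposition mechanism posits, the sketch below strips a known prefix and suffix from a complex word to recover an explicit stem; the affix inventories and the stem lexicon are illustrative assumptions, and this is not the authors' own processing model.

# Minimal sketch of explicit morphological decomposition by affix stripping.
# The affix lists and the stem lexicon below are illustrative assumptions.
PREFIXES = ["un", "re", "dis"]
SUFFIXES = ["ness", "ment", "ing", "ed"]
STEMS = {"happi", "happy", "teach", "play"}

def decompose(word):
    """Return (prefix, stem, suffix) if the word splits into known parts."""
    for prefix in [""] + PREFIXES:
        for suffix in [""] + SUFFIXES:
            if word.startswith(prefix) and word.endswith(suffix):
                stem = word[len(prefix):len(word) - len(suffix)]
                if stem in STEMS:
                    return prefix, stem, suffix
    return None  # no decomposition found

print(decompose("unhappiness"))  # ('un', 'happi', 'ness')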
Andrew M. Olney – Grantee Submission, 2021
This paper explores a general approach to paraphrase generation using a pre-trained seq2seq model fine-tuned using a back-translated anatomy and physiology textbook. Human ratings indicate that the paraphrase model generally preserved meaning and grammaticality/fluency: 70% of meaning ratings were above 75, and 40% of paraphrases were considered…
Descriptors: Translation, Language Processing, Error Analysis (Language), Grammar
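As a sketch of the general approach the abstract names, the snippet below generates paraphrase candidates with a pre-trained seq2seq model through the Hugging Face transformers API; the checkpoint name and the "paraphrase:" task prefix are placeholder assumptions, not the back-translation fine-tuned model from the paper.

# Minimal sketch of seq2seq paraphrase generation.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "your-org/your-paraphrase-model"  # hypothetical fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

sentence = "The heart pumps blood through the circulatory system."
inputs = tokenizer("paraphrase: " + sentence, return_tensors="pt")  # task prefix is an assumption
outputs = model.generate(
    **inputs,
    num_beams=5,             # beam search for higher-quality candidates
    num_return_sequences=3,  # return several paraphrase candidates
    max_new_tokens=60,       # cap the paraphrase length
)
for candidate in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(candidate)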